Welcome¶
GINO - GINO Is Not ORM - is a lightweight asynchronous ORM built on top of SQLAlchemy core for Python asyncio. For now (early 2018) GINO supports only one dialect, asyncpg.
- Free software: BSD license
- Requires: Python 3.5
- Source code: https://github.com/fantix/gino
- Community: https://gitter.im/python-gino/Lobby
This documentation is still under development. Please excuse the WIP pages.
Contents¶
Get Started¶
This tutorial helps beginners get started with the basics of GINO. The target audience of this tutorial should have basic knowledge of:
- RDBMS, especially PostgreSQL
- Asynchronous programming in Python
Knowledge of SQLAlchemy is not required.
Introduction¶
Simply speaking, GINO helps you write and execute raw SQL in your asynchronous application. Instead of interacting with the RDBMS directly through raw SQL, you can access your data through a friendly object-oriented API.
You may not need GINO - or asynchronous database connections in general - because it adds quite some complexity and risk to your stack, and it won’t make your code run faster, if not slower. Please read Why Asynchronous ORM? for more information.
Installation¶
Note
GINO optionally depends on aiocontextvars for sharing a connection between method calls or chained coroutines without passing the connection object over and over again. It is highly recommended for most projects, unless you truly need a bare environment and want to handle connections manually.
Important
aiocontextvars will be replaced by contextvars once the latter supports asyncio. Neither will be needed in Python 3.7, which ships with a built-in contextvars module.
Stable release¶
To install GINO, run this command in your terminal:
$ pip install gino aiocontextvars
This is the preferred method to install GINO, as it will always install the most recent stable release.
If you don’t have pip installed, this Python installation guide can guide you through the process.
From sources¶
The sources for GINO can be downloaded from the Github repo.
You can either clone the public repository:
$ git clone git://github.com/fantix/gino
Or download the tarball:
$ curl -OL https://github.com/fantix/gino/tarball/master
Once you have a copy of the source, you can install it with:
$ python setup.py install
Declare Models¶
First of all, we’ll need a Gino object, usually under the name of db as a global variable:
from gino import Gino
db = Gino()
db acts like a reference to the database; most database interactions will go through it.
“Model” is a basic concept in GINO. It is a Python class inherited from db.Model. Each Model subclass maps to one table in the database, while each object of the class represents one row in the table. This should feel familiar if ORM is not a strange word to you. Now let’s declare a model:
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer(), primary_key=True)
nickname = db.Column(db.Unicode(), default='noname')
By declaring this User class, we are actually defining a database table named users, with two columns id and nickname. Note that the fixed __tablename__ property is required. GINO suggests singular for model names, and plural for table names. Each db.Column property defines one column for the table, where its first parameter indicates the column type in the database, while the rest is for other column attributes or constraints. You can find a mapping of database types to db types here in the SQLAlchemy documentation.
Note
SQLAlchemy is a powerful ORM library for non-asynchronous programming in Python, on top of which GINO is built. SQLAlchemy supports many popular RDBMS including PostgreSQL and MySQL through different dialect implementations, so that the same Python code can be compiled into different SQL depending on the dialect you choose. GINO inherits this support too, but for now there is only one dialect, for PostgreSQL through asyncpg.
If you need constraints or indexes covering multiple columns, these are also defined using properties in model classes. The property names must be unique, but are otherwise not used. Example:
class Booking(db.Model):
__tablename__ = 'bookings'
day = db.Column(db.Date)
booker = db.Column(db.String)
room = db.Column(db.String)
_pk = db.PrimaryKeyConstraint('day', 'booker', name='bookings_pkey')
_idx1 = db.Index('bookings_idx_day_room', 'day', 'room', unique=True)
_idx2 = db.Index('bookings_idx_booker_room', 'booker', 'room')
It is also possible to define model constraints and indexes outside the model class if that is preferred. For more details on constraints and indexes, see here in the SQLAlchemy documentation.
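For instance, an extra index could be attached to the bookings table from above after the class is defined - a minimal sketch (the index name is made up):
# attaches itself to the bookings table because it references its columns
db.Index('bookings_idx_day_booker', Booking.day, Booking.booker)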
Due to implementation limitations, it is currently not allowed to specify explicit constraints and indexes as direct attributes in classes that are meant to be subclassed. The same is true for constraints and indexes specified through the __table_args__ attribute. In order to e.g. define constraints in mixin classes, declared_attr() is required. Please feel free to read more about it in its API documentation.
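A rough, hedged sketch of the mixin approach, assuming declared_attr can be imported from gino.declarative as the API reference suggests (the exact import path and behavior may vary by GINO version):
from gino.declarative import declared_attr

class NameIndexMixin:
    @declared_attr
    def name_idx(cls):
        # evaluated once per concrete subclass, so each table gets
        # its own Index object instead of sharing a single one
        return db.Index(f'{cls.__tablename__}_name_idx', 'name')

class Product(NameIndexMixin, db.Model):
    __tablename__ = 'products'
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.Unicode())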
Get Connected¶
The declaration only defines the mapping; it does not create the actual table in the database. To do that, we need to get connected first. Let’s create a PostgreSQL database for this tutorial:
$ createdb gino
Then we tell our db object to connect to this database:
import asyncio
async def main():
await db.set_bind('postgresql://localhost/gino')
asyncio.get_event_loop().run_until_complete(main())
If this runs successfully, then you are connected to the newly created database.
Here postgresql indicates the database dialect to use (the default driver is asyncpg; you can explicitly specify that with postgresql+asyncpg://, or simply asyncpg://), localhost is where the server is, and gino is the name of the database. Check here for more information about how to compose this database URL.
Note
Under the hood set_bind() calls create_engine() and binds the engine to this db object. The GINO engine is similar to the SQLAlchemy engine, but not identical, because the GINO engine is asynchronous while the SQLAlchemy one is not. Please refer to the API reference of GINO for more information.
Now that we are connected, let’s create the table in the database (in the same main() method):
await db.gino.create_all()
Warning
It is db.gino.create_all, not db.create_all, because db is inherited from the SQLAlchemy MetaData, and db.create_all is from SQLAlchemy, using non-asynchronous methods, which doesn’t work with the bound GINO engine.
In practice create_all() is usually not an ideal solution. To manage the database schema, a tool like Alembic is recommended.
If you want to explicitly disconnect from the database, you can do this:
await db.pop_bind().close()
Let’s review the code we have so far together in one piece before moving on:
import asyncio
from gino import Gino
db = Gino()
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer(), primary_key=True)
nickname = db.Column(db.Unicode(), default='noname')
async def main():
await db.set_bind('postgresql://localhost/gino')
await db.gino.create_all()
# further code goes here
await db.pop_bind().close()
asyncio.get_event_loop().run_until_complete(main())
CRUD Operations¶
One of GINO’s core features is to Create, Retrieve, Update or Delete model objects in the database, also known as the CRUD operations.
Create¶
Let’s start by creating a User:
user = await User.create(nickname='fantix')
# This will cause GINO to execute this SQL with parameter 'fantix':
# INSERT INTO users (nickname) VALUES ($1) RETURNING users.id, users.nickname
As mentioned previously, the user object represents the newly created row in the database. You can get the value of each column through the declared column properties on the object:
print(f'ID: {user.id}') # 1
print(f'Nickname: {user.nickname}') # fantix
It is also possible to create a model instance in-memory first, modify it, then finally create it in the database:
user = User(nickname='fantix')
user.nickname += ' (founder)'
await user.create()
Retrieve¶
To retrieve a model object from the database by primary key, you can use the class method get() on the model class. Now let’s retrieve the same row:
user = await User.get(1)
# SQL (parameter: 1):
# SELECT users.id, users.nickname FROM users WHERE users.id = $1
Normal SQL queries are done through a class property query. For example, let’s retrieve all User objects from the database as a list:
all_users = await db.all(User.query)
# SQL:
# SELECT users.id, users.nickname FROM users
Alternatively, you can use the gino extension on query. This has exactly the same effect as above:
all_users = await User.query.gino.all()
# SQL:
# SELECT users.id, users.nickname FROM users
Note
User.query is actually a SQLAlchemy query, with its own non-asynchronous execution methods. GINO adds this gino extension on all executable SQLAlchemy clause objects to conveniently execute them in the asynchronous way, so that you don’t even need to import the db reference for execution.
Now let’s add some filters. For example, find all users with ID lower than 10:
founding_users = await User.query.where(User.id < 10).gino.all()
# SQL (parameter: 10):
# SELECT users.id, users.nickname FROM users WHERE users.id < $1
Read more here about writing queries, because the query object comes straight from SQLAlchemy core.
Warning
Once you get a model object, it is purely in memory and fully detached from the database. That means, if the row is externally updated, the object values remain unchanged. Likewise, changes made to the object won’t affect the database values.
Also, GINO keeps no track of model objects, therefore getting the same row twice returns two different objects with identical values. Modifying one does not magically affect the other.
Different from traditional ORMs, GINO model objects are more like plain results of SQL queries than stateful ORM objects. In order to adapt to asynchronous programming, GINO is designed to be that simple. That’s also why GINO Is Not ORM.
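A minimal sketch of this behavior, reusing the User model from above:
u1 = await User.get(1)
u2 = await User.get(1)
assert u1 is not u2               # two separate objects for the same row
u1.nickname = 'changed locally'   # in-memory only: u2 and the database are unaffected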
Sometimes we want to get only one object, for example getting the user by name when logging in. There’s a shortcut for this scenario:
user = await User.query.where(User.nickname == 'fantix').gino.first()
# SQL (parameter: 'fantix'):
# SELECT users.id, users.nickname FROM users WHERE users.nickname = $1
If there is no user named “fantix” in the database, user will be None.
And sometimes we may want to get a single value from the database - the name of the user with ID 1, for example. Then we can use the select() class method:
name = await User.select('nickname').where(User.id == 1).gino.scalar()
# SQL (parameter: 1):
# SELECT users.nickname FROM users WHERE users.id = $1
print(name) # fantix
Or get the count of all users:
population = await db.func.count(User.id).gino.scalar()
# SQL:
# SELECT count(users.id) AS count_1 FROM users
print(population) # 17 for example
Update¶
Then let’s try to make some modifications. In this example we’ll mix in some of the retrieve operations we just tried.
# create a new user
user = await User.create(nickname='fantix')
# get its name
name = await User.select('nickname').where(
User.id == user.id).gino.scalar()
assert name == user.nickname # they are both 'fantix' before the update
# modification here
await user.update(nickname='daisy').apply()
# SQL (parameters: 'daisy', 1):
# UPDATE users SET nickname=$1 WHERE users.id = $2 RETURNING users.nickname
print(user.nickname) # daisy
# get its name again
name = await User.select('nickname').where(
User.id == user.id).gino.scalar()
print(name) # daisy
assert name == user.nickname # they are both 'daisy' after the update
So update() is the first GINO method we’ve met so far on the model instance level. It accepts multiple keyword arguments, whose keys are column names and whose values are the new values to update to. The following apply() call makes the update happen in the database.
Note
GINO explicitly splits the in-memory update and the SQL update into two methods: update() and apply(). update() will update the in-memory model object and return an UpdateRequest object which contains all the modifications. A following apply() on the UpdateRequest object will apply these recorded modifications to the database by executing a compiled SQL statement.
Tip
The UpdateRequest object has another method named update() which works the same as the one on the model object, except that it combines the new modifications with the ones already recorded in the current UpdateRequest object, and it returns the same UpdateRequest object. That means you can chain the updates and end up with one apply(), or make use of the UpdateRequest object to combine several updates in a batch.
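A minimal sketch of such chaining, reusing the user object from the examples above (the new nickname values are just placeholders):
req = user.update(nickname='daisy')           # in-memory change, returns an UpdateRequest
req = req.update(nickname='daisy the admin')  # fold another change into the same request
await req.apply()                             # one UPDATE applies all recorded changes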
update() on a model object affects only the row represented by that object. If you want to do an update with a wider condition, you can use update() on the model class level, with a bit of a difference:
await User.update.values(nickname='Founding Member ' + User.nickname).where(
User.id < 10).gino.status()
# SQL (parameter: 'Founding Member ', 10):
# UPDATE users SET nickname=($1 || users.nickname) WHERE users.id < $2
name = await User.select('nickname').where(
User.id == 1).gino.scalar()
print(name) # Founding Member fantix
There is no UpdateRequest here; everything is again a SQLAlchemy clause - see its documentation here for your reference.
Delete¶
At last. Deleting is similar to updating, but way simpler.
user = await User.create(nickname='fantix')
await user.delete()
# SQL (parameter: 1):
# DELETE FROM users WHERE users.id = $1
print(await User.get(user.id)) # None
Hint
Remember the model object is in memory? In the last print() statement, even though the row is already deleted in the database, the object user still exists with its values untouched.
Or mass deletion (never forget the where clause, unless you want to truncate the whole table!!):
await User.delete.where(User.id > 10).gino.status()
# SQL (parameter: 10):
# DELETE FROM users WHERE users.id > $1
With basic CRUD, you can already make some amazing stuff with GINO. This tutorial ends here, please find out more in detail from the rest of this documentation, and have fun hacking!
Schema Declaration¶
There are 3 ways to declare your database schema to be used with GINO. Because GINO is built on top of SQLAlchemy core, either way you are actually declaring a SQLAlchemy Table.
GINO Engine¶
This is the most minimal way to use GINO - using only GinoEngine (and GinoConnection too); everything else is vanilla SQLAlchemy core. This is useful when you have legacy code written in SQLAlchemy core that needs porting to asyncio. For new code please use the other two ways.
For example, the table declaration is the same as SQLAlchemy core tutorial:
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
metadata = MetaData()
users = Table(
'users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
Column('fullname', String),
)
addresses = Table(
'addresses', metadata,
Column('id', Integer, primary_key=True),
Column('user_id', None, ForeignKey('users.id')),
Column('email_address', String, nullable=False)
)
Note
When using the GINO Engine only, it is usually your own business to create the tables, either with create_all() on a normal non-async SQLAlchemy engine, or using Alembic. However, it is still possible to do it with GINO if needed:
import gino
from gino.schema import GinoSchemaVisitor
async def main():
engine = await gino.create_engine('postgresql://...')
await GinoSchemaVisitor(metadata).create_all(engine)
Then, construct queries, in SQLAlchemy core too:
ins = users.insert().values(name='jack', fullname='Jack Jones')
So far, everything is still in SQLAlchemy. Now let’s get connected and execute the insert:
async def main():
engine = await gino.create_engine('postgresql://localhost/gino')
conn = await engine.acquire()
await conn.status(ins)
print(await conn.all(users.select()))
# Outputs: [(1, 'jack', 'Jack Jones')]
Here create_engine() creates a GinoEngine, then acquire() checks out a GinoConnection, and status() executes the insert and returns the status text. This works similarly to SQLAlchemy execute() - they take the same parameters but return a bit differently. There are also other similar query APIs:
- all() returns a list of RowProxy
- first() returns one RowProxy, or None
- scalar() returns a single value, or None
- iterate() returns an asynchronous iterator which yields RowProxy
Please go to their API for more information.
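A minimal, hedged sketch exercising these APIs with the engine, users and ins objects from this section (run inside the async main() shown above):
async with engine.acquire() as conn:
    print(await conn.status(ins))             # status text of the INSERT
    print(await conn.all(users.select()))     # list of RowProxy, possibly empty
    print(await conn.first(users.select()))   # one RowProxy, or None
    print(await conn.scalar(users.select()))  # first value of the first row, or None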
GINO Core¶
In the previous scenario, a GinoEngine must not be set as metadata.bind, because it is not a regular SQLAlchemy Engine and thus won’t work correctly there. For this, GINO provides Gino, a subclass of MetaData, usually instantiated globally under the name of db. It can still be used as a normal MetaData, while offering some conveniences:
- It delegates most public types you can access on sqlalchemy
- It works with both a normal SQLAlchemy engine and an asynchronous GINO engine
- It exposes all query APIs on the GinoConnection level
- It injects two gino extensions on SQLAlchemy query clauses and schema items, allowing short inline execution like users.select().gino.all()
- It is also the entry for the third scenario, see later
Then we can achieve the previous scenario with less code, like this:
from gino import Gino
db = Gino()
users = db.Table(
'users', db,
db.Column('id', db.Integer, primary_key=True),
db.Column('name', db.String),
db.Column('fullname', db.String),
)
addresses = db.Table(
'addresses', db,
db.Column('id', db.Integer, primary_key=True),
db.Column('user_id', None, db.ForeignKey('users.id')),
db.Column('email_address', db.String, nullable=False)
)
async def main():
async with db.with_bind('postgresql://localhost/gino'):
await db.gino.create_all()
await users.insert().values(
name='jack',
fullname='Jack Jones',
).gino.status()
print(await users.select().gino.all())
# Outputs: [(1, 'jack', 'Jack Jones')]
Similar to SQLAlchemy core and ORM, this is GINO core. All tables and queries are still made of SQLAlchemy constructs, whose rules still apply, yet sqlalchemy is seemingly never imported. This is useful when an ORM is unwanted.
Tip
asyncpgsa does the same thing, but in a conceptually reversed way - instead of having asyncpg work for SQLAlchemy, it makes SQLAlchemy work for asyncpg (GINO used to work that way too, because GINO was inspired by asyncpgsa). Either way works fine; it’s just a matter of taste which API style to use, SQLAlchemy or asyncpg.
GINO ORM¶
If you want to further reduce the amount of code, at the cost of a bit of implicitness, welcome to the ORM world. Even though GINO made itself not quite a traditional ORM by being simple and explicit in order to safely work with asyncio, common ORM concepts are still valid - a table is a model class, a row is a model instance. Here is the same example rewritten in GINO ORM:
from gino import Gino
db = Gino()
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String)
fullname = db.Column(db.String)
class Address(db.Model):
__tablename__ = 'addresses'
id = db.Column(db.Integer, primary_key=True)
user_id = db.Column(None, db.ForeignKey('users.id'))
email_address = db.Column(db.String, nullable=False)
async def main():
async with db.with_bind('postgresql://localhost/gino'):
await db.gino.create_all()
await User.create(name='jack', fullname='Jack Jones')
print(await User.query.gino.all())
# Outputs: [<User object at 0x10a8ba860>]
Important
The __tablename__ is a mandatory field to define a concrete model.
As you can see, the declaration is pretty much the same as before. Underneath they are identical, declaring two tables in db. The class style is just more declarative. Instead of users.c.name, you can now access the column as User.name. The implicitly created Table is available at User.__table__ and Address.__table__. You can use anything that works in GINO core here.
Tip
db.Model is a dynamically created parent class for your models. It is associated with the db on initialization, therefore the table is put into that very db when you declare your model class.
Things become different when it comes to CRUD. You can use model-level methods to directly create() a model instance, instead of inserting a new row, or delete() a model instance without needing to specify the where clause manually. Queries return model instances instead of RowProxy objects, and row values are directly available as attributes on model instances. See also: CRUD.
After all, GinoEngine is always in use. Next let’s dig more into it.
Engine and Connection¶
GinoEngine is the core of GINO. It acts like a pool of connections but also does the work of assembling everything together:

Under the hood, the engine is associated with a specific dialect instance on creation, e.g. the asyncpg dialect. The dialect is actually a set of classes that implements the GINO dialect API, offering all the details about how to operate on this specific database. In the diagram, gray means internal, while green means touchable by end users.
During creation, the engine will also ask the dialect to create a database connection pool for it. The pool type is also a part of the dialect API, because asynchronous database drivers usually have their own pool implementations, so their GINO dialects should hide such implementation differences behind the unified dialect API for the engine to use.
Note
In SQLAlchemy, database drivers are supposed to follow the DB-API standard, which does not usually provide a pool implementation. Therefore, SQLAlchemy has its own pool implementation, created directly in engine. This is where this diagram doesn’t fit SQLAlchemy.
The pool creates raw connections, not the green GinoConnection in the diagram. The connection in the diagram is a many-to-one wrapper of the raw connection, because of the reuse and lazy features - we’ll get to that part later. The connection is created by the engine, thus inherits the same dialect, and is used for running queries.
On the outer side, SQLAlchemy queries can be executed directly on the engine or on a connection. When executed on the engine, it will try to acquire a reusable connection to actually run the query, and release the connection after use.
Note
Another difference from SQLAlchemy here: GINO execution methods always return final results, while in SQLAlchemy accessing the result may cause further implicit database access. Therefore the GINO engine immediately releases the connection when the execution method on the engine returns, whereas SQLAlchemy can only release the connection implicitly when the result data is found to be exhausted.
Even when immediately releasing a connection, GINO may not release the related raw connection if that raw connection was reused from another parent connection. We’ll get to this later.
GINO also supports implicit execution, without having to specify an engine or connection explicitly. This is done by binding the engine to the db instance, also known as the MetaData or Gino instance. You may even bind a GinoConnection instance, but that is strongly discouraged because it is very much untested.
At last, as the ORM / CRUD feature, models are just add-ons on top of everything else to generate queries. The parent model class is connected to a db instance on creation, therefore the models can do implicit execution too if their db has a bind.
Then let’s get into some details.
Creating Engines¶
GINO reuses the strategy system SQLAlchemy provides to create engines. The name of GINO’s strategy to create an asynchronous GinoEngine is just gino, but it is only available after gino is imported:
import gino, sqlalchemy
async def main():
e = await sqlalchemy.create_engine('postgresql://...', strategy='gino')
# e is a GinoEngine
Tip
Please read this SQLAlchemy document to learn about writing database URLs. Also, the GINO strategy replaces the default driver of the postgresql:// dialect from psycopg2 with asyncpg, so you don’t have to modify a URL that is shared between GINO and vanilla SQLAlchemy in parallel. Alternatively, you can explicitly specify the driver with postgresql+asyncpg://... or simply asyncpg://....
GINO also offers gino.create_engine() as a shortcut, which only sets the default strategy to gino and does nothing more. So here is an identical example:
import gino
async def main():
e = await gino.create_engine('postgresql://...')
# e is also a GinoEngine
As you may have noticed, when using the GINO strategy, create_engine() returns a coroutine, which must be awaited for the result, because it will create a database connection pool behind the scenes and actually make a few initial connections by default.
Since this is just SQLAlchemy create_engine(), the same rules for parameters apply in GINO too. For now, GINO supports only a small subset of all the parameters listed in the SQLAlchemy document (we are working on it!):
For Dialect:
For Engine:
While these parameters are discarded by GINO:
In addition, keyword arguments for creating the underlying pool are accepted here. In the case of asyncpg, they are from create_pool().
For example, we can create an engine without initial connections:
e = await gino.create_engine('postgresql://...', min_size=0)
Similar to SQLAlchemy, GINO also provides a shortcut to create an engine while setting it as a bind. In SQLAlchemy it is like this:
import sqlalchemy
metadata = sqlalchemy.MetaData()
metadata.bind = 'postgresql://...'
# or in short
metadata = sqlalchemy.MetaData('postgresql://...')
This implicitly calls create_engine() under the hood. However, in GINO creating an engine requires await, so it can no longer be hidden behind a normal assignment statement. Therefore, GINO removed the assignment magic in the subclass Gino, reverting it to a simple assignment:
import gino
db = gino.Gino()
async def main():
# db.bind = 'postgresql://...' doesn't work!! It sets a string on bind
engine = await gino.create_engine('postgresql://...')
db.bind = engine
And it provides a shortcut to do so:
engine = await db.set_bind('postgresql://...')
And another simpler shortcut for one-time usage:
db = await gino.Gino('postgresql://...')
To unset a bind and close the engine:
engine, db.bind = db.bind, None
await engine.close()
Or with a shortcut correspondingly:
await db.pop_bind().close()
Furthermore, the two steps can be combined into one shortcut with an asynchronous context manager:
async with db.with_bind('postgresql://...') as engine:
# your code here
Managing Connections¶
With a GinoEngine at hand, you can now acquire connections from the pool:
conn = await engine.acquire()
Don’t forget to release it after use:
await conn.release()
Yes, this can easily be missed. The recommended way is to use the asynchronous context manager:
async with engine.acquire() as conn:
# play with the connection
Here conn is a GinoConnection instance. As mentioned previously, a GinoConnection is mapped to an underlying raw connection, as shown in the following diagram:

Each column has at most one actual raw connection, and the numbers indicate the sequence in which the connections are created in this example. It is designed this way so that GINO can offer two features for connection management: reuse and lazy. They are keyword arguments on acquire() and are switched off by default.
reuse¶
When acquiring a GinoConnection (2), GINO will first borrow a raw connection (1) from the underlying pool, and assign it to this GinoConnection (2). This is the default behavior of acquire() with no arguments given. Even when you are nesting two acquires, you still get two actual raw connections borrowed:
async with engine.acquire() as conn1:
async with engine.acquire() as conn2:
# conn2 is a completely different connection than conn1
But sometimes conn2 may exist in a different method:
async def outer():
async with engine.acquire() as conn1:
await inner()
async def inner():
async with engine.acquire() as conn2:
# ...
And we probably wish inner could reuse the same raw connection as outer to save some resources, or borrow a new one if inner is called individually without outer:
async def outer():
async with engine.acquire() as conn1:
await inner(conn1)
async def inner(conn2=None):
if conn2 is None:
async with engine.acquire() as conn2:
# ...
else:
# the same ... again
This is exactly the scenario where reuse could be useful. We can simply tell acquire() to reuse the most recent reusable connection in the current context by setting reuse=True, as presented in this identical example:
async def outer():
async with engine.acquire() as conn1:
await inner(conn1)
async def inner():
async with engine.acquire(reuse=True) as conn2:
# ...
Back to the previous diagram, the blue GinoConnection instances (3, 4, 6) are “reusing connections” acquired with reuse=True, while the green ones (2, 5, 7) are not, and thus become “reusable connections”. The green reusable connections are put in a stack in the current context, so that acquire(reuse=True) always reuses the most recent connection at the top of the stack. For example, (3) and (4) reuse the only available (2) at that moment, therefore (2, 3, 4) all map to the same raw connection (1). Then after (5), (6) no longer reuses (2) because (5) is now the new top of the stack.
Tip
By context, we are actually referring to the context concept in either aiocontextvars (the optional dependency) or contextvars (the new module in the upcoming Python 3.7). Simply speaking, you may treat a function call chain, including awaited Tasks created in the chain, as being in the same context - something like a thread local in asyncio.
Note
That is to say, aiocontextvars is a required dependency for reuse to work correctly in Python 3.6 - actually reuse is the reason for introducing context in the first place. Without context, the stack is always empty for any acquire(), thus no one could reuse any raw connection at all.
GinoConnection (2) may be created through acquire(reuse=True) too - because the stack is empty before (2), there is nothing to reuse, so (2) upgrades itself to a reusable connection.
Releasing a reusing connection won’t cause the reused raw connection to be returned to the pool; only directly releasing the reused GinoConnection can do so. Connections should be released in the reverse order they were acquired, but if the reused connection is released before its reusing connections by accident, then all the reusing connections depending on it become closed, because they are reusing the same raw connection which has been returned to the pool, and any further execution will fail. For example, if (3) is released first, then (2) and (4) are still functional. But if (2) is released first, then (3) and (4) will be released implicitly and are no longer usable.
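A minimal sketch of that release order, using manual acquire and release:
conn1 = await engine.acquire()            # reusable; borrows a raw connection
conn2 = await engine.acquire(reuse=True)  # reuses conn1's raw connection
await conn2.release()                     # fine: the raw connection stays with conn1
await conn1.release()                     # the raw connection goes back to the pool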
lazy¶
As you may have noticed, GinoConnection (5) does not have an underlying raw connection, even though it is reused by (6). This is because both (5) and (6) set lazy=True on acquire.
A lazy connection will not borrow a raw connection on creation; it will only do so when it has to, e.g. when executing a query or starting a transaction. For example, GinoConnection (7) is acquired lazily without a raw connection, and (8) is only created when a query is executed on (7):
async with engine.acquire(lazy=True) as conn: # (7)
await conn.scalar('select now()') # (8)
On the implementation level, lazy is extremely simple in acquire(): if lazy=False, borrow a raw connection; otherwise do nothing. That’s it. Before executing a query or starting a transaction, a GinoConnection will always try to borrow a raw connection if none is present. This allows GINO to “transiently release” a raw connection, while all the GinoConnection objects mapped to this raw connection are put in lazy mode (again). This is especially useful when you need to run some networking tasks in a database-related context - the networking task may take a long time to finish, and we don’t want to waste a checked-out connection for nothing. For example:
async with engine.acquire(lazy=True) as conn: # (7)
await conn.scalar('select now()') # (8)
await conn.release(permanent=False) # release (8)
await asyncio.sleep(10) # simulate long I/O work
await conn.scalar('select now()') # re-acquire a new raw connection,
# not necessarily the same (8)
When used together with reuse, at most one raw connection may be borrowed for one reusing chain. For example, executing queries on both (5) and (6) will result in only one raw connection being checked out, no matter which executes first. It is also worth noting that if we set lazy=False on (6), the raw connection will be borrowed immediately on acquire, and shared between both (5) and (6). It’s been quite a while, so let me post the same diagram again:

reusable¶
Usually, you don’t have to worry about the two options reuse and lazy; using the default acquire() will always create a concrete GinoConnection with a new raw connection attached to it. It is only that such connections are reusable by default (the green ones). If you need an absolutely isolated connection with no risk of being reused, you may use reusable=False on acquire. As shown in the diagram, the unreusable GinoConnection is an orphan away from any stack:
async with engine.acquire(): # (2)
async with engine.acquire(reusable=False): # the unreusable connection
async with engine.acquire(reuse=True): # (3)
Unreusable connections can be lazy. But it is usually meaningless to specify both reuse=True and reusable=False at the same time, because reusing connections are always unreusable - they are also not in the stack. You cannot reuse a reusing connection; you only reuse a reusable connection in the stack. Making a reusing connection unreusable doesn’t make its related reusable connection unreusable. If this is getting more confusing, just don’t use acquire(reuse=True, reusable=False) unless you know what it does.
current_connection¶
Beyond the scenarios supported by the above three options, there is still one left out: we may want to acquire a reusing-only connection. There is no option for that, but GINO offers the same capability through current_connection, which is always the reusable GinoConnection at the top of the current stack, or None if the current stack is empty.
Tip
The difference between current_connection and acquire(reuse=True) is that the latter always produces a GinoConnection, while the former may not.
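A minimal sketch of using current_connection, assuming an engine is at hand:
conn = engine.current_connection  # reusable connection at the top of the stack, or None
if conn is not None:
    await conn.scalar('select now()')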
Executing Queries¶
Once you have a GinoConnection instance, you can start executing queries with it. There are 4 variants of the execute method: all(), first(), scalar() and status(). They are basically the same: they accept the same parameters and call the same underlying methods. The difference is how they treat the results:
- all() returns all results in a list, which may be empty when the query has no result - empty, but still a list.
- first() returns the first result directly, or None if there is no result at all. There is usually some optimization behind the scenes to efficiently fetch only the first result, instead of loading the full result set into memory.
- scalar() is similar to first(); it returns the first value of the first result. It is quite convenient for retrieving a single scalar value from the database, like NOW(), MAX(), COUNT() or whatever generates a single value. None is also returned when there is no result; it is up to you to distinguish between no result and the first value being NULL.
- status() executes the query and discards all the query results. Instead it returns the execution status line as it is, usually a textual string. Note, there may be no optimization to only return the status without loading the results, so make your query generate nothing if you don’t want any result.
By “result”, I mean a SQLAlchemy RowProxy - an immutable row instance with both tuple and dict interfaces. Database values are translated twice before they are eventually stored in a RowProxy: first by the database driver (dialect) from network payload to Python objects (see Type Conversion for how asyncpg does this), then by the SQLAlchemy result_processor() depending on the actual type and dialect.
The arguments taken by these 4 methods are identical to the ones accepted by SQLAlchemy execute() (click to read more): usually a plain SQL string or a SQLAlchemy query clause, followed by query parameters. In the case where multiple dictionaries are given as multiparams, all 4 methods will always return None, discarding all results. Likewise, the parameter values are processed twice too: first by bind_processor(), then by the database driver.
GINO also supports SQLAlchemy execution_options(), provided either on the engine level, the connection level or on queries. At the moment we are working on being compatible with the SQLAlchemy execution options. In the meanwhile, GINO provides several new execution options; for example, enabling return_model and providing a model will make all() and first() return ORM model instance(s) instead of RowProxy instance(s). See also execution_options() for more information.
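As a small, hedged sketch of the return_model option mentioned above (reusing the User model from the tutorial):
# ask a CRUD query to hand back plain RowProxy rows instead of User instances
rows = await User.query.execution_options(return_model=False).gino.all()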
In addition, GINO has an iterate() method to traverse the query results progressively, instead of loading all the results at once. This method takes the same arguments as the other 4 execute methods and follows the same rules of data handling. For now with asyncpg, this creates a server-side cursor.
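A minimal sketch of iterate(), assuming a connection acquired from an engine as before; with asyncpg the server-side cursor requires an active transaction:
async with engine.acquire() as conn:
    async with conn.transaction():
        async for row in conn.iterate('SELECT generate_series(1, 3)'):
            print(row[0])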
Implicit Execution¶
Acquiring a GinoConnection and executing queries on it is the most explicit way. You can also execute queries on a GinoEngine instance. In this case, a connection will be acquired with reuse=True for you implicitly, and released after returning:
await engine.scalar('select now()')
This is equivalent to:
async with engine.acquire(reuse=True) as conn:
await conn.scalar('select now()')
This allows you to easily write connectionless code. For example:
async def get_now():
return await engine.scalar('select now()')
async def main():
async with engine.acquire():
now = await get_now()
await engine.status('UPDATE ...')
In this example, main() will take only one raw connection. get_now() can also work on its own outside of any acquire() context, thanks to reuse.
Furthermore, GINO provides the same query APIs on Gino directly. They are simply delegates to the corresponding API methods on the bind. This allows even engine-less programming:
db = gino.Gino()
async def get_now():
return await db.scalar('select now()')
async def main():
async with db.with_bind('postgresql://...'):
now = await get_now()
await db.status('UPDATE ...')
Note
In this example we didn’t put the two queries in an acquire() block, so they might be executed on two different connections.
At last, the SQLAlchemy implicit execution on queries also works in GINO, under an extension named gino:
await users_table.select().gino.all()
By default, the extension GinoExecutor is injected on Executable as a property named gino at the creation of the Gino instance. Therefore, any Executable object has the gino property for implicit execution. Similarly, its execution methods call the corresponding ones on the bind of the db instance.
Warning
The assumption for the code above to run as expected is having aiocontextvars installed on Python 3.6, or directly using Python 3.7. Otherwise, nested implicit executions will always run on different new connections. This may not matter much here, but it is crucial for transactions.
Transaction¶
It is crucial to correctly manage transactions in an asynchronous program, because you never know how long an await will actually wait, and it can cause disasters if transactions are held open for too long. GINO enforces explicit transaction management to help deal with this.
Basic usage¶
Transactions belong to a GinoConnection. The most common way to use transactions is through an async with statement:
async with connection.transaction() as tx:
await connection.status('INSERT INTO mytable VALUES(1, 2, 3)')
This guarantees a transaction is opened when entering the async with block, and closed when exiting the block - committed if it exits normally, or rolled back on exception. The underlying transaction instance from the database driver is available at raw_transaction, but in most cases you don’t need to touch it.
GINO provides two convenient shortcuts to end the transaction early: raise_commit() and raise_rollback(). They raise an internal exception to correspondingly commit or roll back the transaction, thus the code within the async with block after raise_commit() or raise_rollback() is skipped. The internal exception inherits from BaseException so that a normal try ... except Exception block can’t trap it. This exception stops propagating at the end of the async with block, so you don’t need to worry about handling it.
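A minimal sketch of ending a transaction early (some_condition is just a placeholder):
async with connection.transaction() as tx:
    await connection.status('INSERT INTO mytable VALUES(1, 2, 3)')
    if some_condition:
        tx.raise_rollback()   # rolls back and skips the rest of the block
    await connection.status('INSERT INTO mytable VALUES(4, 5, 6)')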
Transactions can also be started on a GinoEngine:
async with engine.transaction() as tx:
await engine.status('INSERT INTO mytable VALUES(1, 2, 3)')
Here a GinoConnection is borrowed implicitly before entering the transaction, and it is guaranteed to be returned after the transaction is done. The GinoConnection instance is accessible at tx.connection. Other than that, everything else is the same.
Important
The implicit connection is by default borrowed with reuse=True. That means using transaction() of a GinoEngine within a connection context is the same as calling transaction() of the current connection without having to reference it - no separate connection shall be created.
Similarly, if your Gino instance has a bind, you may also do the same on it:
async with db.transaction() as tx:
await db.status('INSERT INTO mytable VALUES(1, 2, 3)')
Nested Transactions¶
Transactions can be nested; for now on asyncpg, a nested transaction creates a savepoint. Here is a similar example to the one in the asyncpg docs:
async with connection.transaction() as tx1:
await connection.status('CREATE TABLE mytab (a int)')
# Create a nested transaction:
async with connection.transaction() as tx2:
await connection.status('INSERT INTO mytab (a) VALUES (1), (2)')
# Rollback the nested transaction:
tx2.raise_rollback()
# Because the nested transaction was rolled back, there
# will be nothing in `mytab`.
assert await connection.all('SELECT a FROM mytab') == []
As you can see, raise_rollback() breaks only the async with block of the specified tx2; the outer transaction tx1 simply continues. What if we break the outer transaction from within the inner transaction? The inner transaction context won’t trap the internal exception, because it recognizes that the exception was not created upon itself. Instead, the inner transaction context only follows the behavior to either commit or roll back, and lets the exception propagate.
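A minimal sketch of breaking the outer transaction from within the inner one, following the description above:
async with connection.transaction() as tx1:
    async with connection.transaction() as tx2:
        # not raised upon tx2: the inner context rolls back its savepoint
        # and lets the exception propagate
        tx1.raise_rollback()
    await connection.status('never reached')
# the exception stops here; tx1 is rolled back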
Because of the default reusing behavior, transactions on the engine or db follow the same nesting rules. Please see GinoTransaction for more information.
Manual Control¶
Other than using async with, you can also control the transaction manually:
tx = await db.transaction()
try:
await db.status('INSERT INTO mytable VALUES(1, 2, 3)')
await tx.commit()
except Exception:
await tx.rollback()
raise
You can’t use raise_commit() or raise_rollback() here; similarly, it is prohibited to use commit() and rollback() in an async with block.
CRUD¶
THIS IS A WIP
Relationships¶
As of now (April 2018) GINO has no full support for relationships. For one thing, we are still trying to find a decent way of implementing relationships; for another, we insist on an explicit code style in asynchronous programming, and that conflicts with some usual ORM relationship patterns. Still, GINO doesn’t stop you from using relationships in the database through foreign keys or whatever magic, and it gradually provides more features to support doing so.
Model Loader¶
The Model Loader is the magic behind GINO CRUD that translates database rows into model objects. Through CRUD, Model Loaders are assembled internally for you, but you can still use them directly. For example, an ordinary query that returns rows may look like this:
query = db.select([User])
rows = await query.gino.all()
In order to load rows into User objects, you can provide an execution option loader with a new ModelLoader instance:
from gino.loader import ModelLoader
query = db.select([User])
query = query.execution_options(loader=ModelLoader(User))
users = await query.gino.all()
The ModelLoader would then load each database row into a User object. As this is frequently used, GINO made it a shortcut:
query = db.select([User])
query = query.execution_options(loader=User.load())
users = await query.gino.all()
And another shortcut:
query = db.select([User])
query = query.execution_options(loader=User)
users = await query.gino.all()
Tip
User as the loader is transformed into ModelLoader(User) by Loader.get(), explained later in “Loader Expression”.
And again:
query = db.select([User])
users = await query.gino.load(User).all()
This is identical to the normal CRUD query:
users = await User.query.gino.all()
Loader Expression¶
So Loaders are actually row post-processors; they define how the database rows should be processed and returned. Other than ModelLoader, there are also other loaders that can turn the database rows into different kinds of results, based on your definition. GINO provides the Loader Expression feature for you to easily assemble complex loaders.
Tip
This is less relevant to relationships; please skip to the next section if it’s not helpful to you.
Here is an example using all loaders at once:
uid, user, sep, cols = await db.select([User]).gino.load(
(
User.id,
User,
'|',
lambda row, ctx: len(row),
)
).first()
Let’s check this piece by piece. Overall, the argument of load() is a tuple. This is interpreted into a TupleLoader, with each item of the tuple interpreted as a Loader Expression recursively. That means it is possible to nest tuples. The result of a TupleLoader is a tuple.
A Column in a Loader Expression is interpreted as a ColumnLoader. It simply outputs the value of the given column in the database row. It is your responsibility to select the column in the query. Please note, ColumnLoader uses the given column as the index to look for the value, not the name of the column. This is a SQLAlchemy feature to support selecting multiple columns with the same name from different tables in the same query, especially for ORM. So if you are using raw textual SQL and wish to use a ColumnLoader, you’ll have to declare columns for the query:
now = db.Column('time', db.DateTime())
result = await db.text(
    'SELECT now() AT TIME ZONE \'UTC\''
).columns(
    now,
).gino.load(
    ('now:', now)
).first()
print(result)  # now: 2018-04-08 08:23:02.431847
Let’s get back to the previous example. The second item in the tuple is a GINO model class. As we’ve presented previously, it is interpreted into a ModelLoader. By default, it loads the values of all the columns of the given model and creates a new model instance with those values.
Tip
For a complex loader expression, the same row is given to all loaders, so it doesn’t matter that User.id is already used before the model loader.
The last item in the tuple is a callable; it will be called for each row with two arguments: the first argument is the row itself, while the second is a contextual value provided by the outer loader - we’ll get to that later. Similar to map(), the return value of the call will be the loaded result.
At last, if none of the above types matches a Loader Expression, it is treated as is. Like the '|' separator, it will show up as the third item in every result returned by the query.
Many-to-One Relationship¶
A classic many-to-one relationship is also known as referencing - the model on the “many” end keeps a single reference to the model on the “one” end. Although GINO does not enforce it, people usually use a foreign key for the reference:
class Parent(db.Model):
__tablename__ = 'parents'
id = db.Column(db.Integer, primary_key=True)
class Child(db.Model):
__tablename__ = 'children'
id = db.Column(db.Integer, primary_key=True)
parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
So every child has a single parent (or no parent at all), while one parent may have multiple children. GINO provides an easy way to load children with their parents:
async for child in Child.load(parent=Parent).gino.iterate():
print(f'Parent of {child.id} is {child.parent.id}')
As you may have noticed, Child.load is exactly the shortcut to create a ModelLoader shown in the very first example. With some additional keyword arguments, Child.load(parent=Parent) is still a ModelLoader for Child, but the model loader is at the same time a query builder. It is identical to do this:
async for child in Child.load(parent=Parent).query.gino.iterate():
print(f'Parent of {child.id} is {child.parent.id}')
The query property dynamically generates a SQLAlchemy query based on the knowledge of the loader, and sets the loader as an execution option at the same time. The Loader simply forwards unknown attributes to its query, which is why .query can be omitted.
For ModelLoader, all keyword arguments are interpreted as subloaders; their results will be set as attributes of the resulting model under the corresponding keys using setattr(). For example, Parent is interpreted as ModelLoader(Parent), which loads Parent instances, and these Parent instances are set as the parent attribute of the outer Child instance.
Warning
If multiple children reference the same parent, then each child owns a unique parent instance with identical values.
Tip
You don’t have to define a parent attribute on Child. But if you do, you gain the ability to customize how the parent is stored or retrieved. For example, let’s store the parent instance as _parent:
class Child(db.Model):
__tablename__ = 'children'
id = db.Column(db.Integer, primary_key=True)
parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
_parent = None
@property
def parent(self):
return self._parent
@parent.setter
def parent(self, value):
self._parent = value
The query builder works recursively. For ModelLoader, it uses LEFT OUTER JOIN to connect the FROM clauses, in order to achieve the many-to-one scenario. The ON clause is determined automatically by foreign keys. You can also customize the ON clause in case there is no foreign key (a promise is a promise):
loader = Child.load(parent=Parent.on(Child.parent_id == Parent.id))
async for child in loader.query.gino.iterate():
print(f'Parent of {child.id} is {child.parent.id}')
And subloaders can be nested:
subloader = Child.load(parent=Parent.on(Child.parent_id == Parent.id))
loader = Grandson.load(parent=subloader.on(Grandson.parent_id == Child.id))
For now, GINO supports only loading many-to-one joined queries. To modify a relationship, just modify the reference column values.
Self Referencing¶
Warning
Experimental feature.
Self referencing is usually used to create a tree-like structure. For example:
class Category(db.Model):
__tablename__ = 'categories'
id = db.Column(db.Integer, primary_key=True)
parent_id = db.Column(db.Integer, db.ForeignKey('categories.id'))
In order to load leaf categories with their parents, an alias is needed:
Parent = Category.alias()
Then the query would be something like this:
parents = db.select([Category.parent_id])
query = Category.load(parent=Parent.on(
Category.parent_id == Parent.id
)).where(
~Category.id.in_(db.select([Category.alias().parent_id]))
)
async for c in query.gino.iterate():
print(f'Leaf: {c.id}, Parent: {c.parent.id}')
The generated SQL looks like this:
SELECT categories.id, categories.parent_id, categories_1.id, categories_1.parent_id
FROM categories LEFT OUTER JOIN categories AS categories_1
ON categories.parent_id = categories_1.id
WHERE categories.id NOT IN (
SELECT categories_2.parent_id
FROM categories AS categories_2
)
Other Relationships¶
GINO 0.7.4 introduced an experimental distinct feature to reduce a result set with loaders, combining rows under specified conditions. This made it possible to build one-to-many relationships. Using the same parent-child example above, we could load distinct parents with all their children:
class Parent(db.Model):
__tablename__ = 'parents'
id = db.Column(db.Integer, primary_key=True)
def __init__(self, **kw):
super().__init__(**kw)
self._children = set()
@property
def children(self):
return self._children
@children.setter
def add_child(self, child):
self._children.add(child)
class Child(db.Model):
__tablename__ = 'children'
id = db.Column(db.Integer, primary_key=True)
parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
query = Child.outerjoin(Parent).select()
parents = await query.gino.load(
Parent.distinct(Parent.id).load(add_child=Child)).all()
Here the query is still the child outer-joining the parent, but the loader loads parent instances with distinct IDs only, while storing all their children through the add_child setter property. In detail, for each row a parent instance is loaded first if no parent instance with the same ID was loaded previously, otherwise the previously loaded parent instance is reused. Then a child instance is loaded from the same row and fed to the (possibly reused) parent instance through parent.add_child = new_child.
Distinct loaders can be nested to load hierarchical data, but they cannot be used as query builders to automatically generate queries.
GINO provides no additional support for one-to-one relationships - the user should make sure that the query produces rows of distinct instance pairs, and load them with regular GINO model loaders. When in doubt, the distinct feature can be used on both sides, but you’ll have to deal with the conflict manually if more than one related instance is found. For example, we could keep only the last child for each parent:
class Parent(db.Model):
__tablename__ = 'parents'
id = db.Column(db.Integer, primary_key=True)
def __init__(self, **kw):
super().__init__(**kw)
self._child = None
@property
def child(self):
return self._child
@child.setter
def child(self, child):
self._child = child
class Child(db.Model):
__tablename__ = 'children'
id = db.Column(db.Integer, primary_key=True)
parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
query = Child.outerjoin(Parent).select()
parents = await query.gino.load(
Parent.distinct(Parent.id).load(child=Child.distinct(Child.id))).all()
Similarly, you can build many-to-many relationships in the same way:
class Parent(db.Model):
__tablename__ = 'parents'
id = db.Column(db.Integer, primary_key=True)
def __init__(self, **kw):
super().__init__(**kw)
self._children = set()
@property
def children(self):
return self._children
@children.setter
def add_child(self, child):
self._children.add(child)
child._parents.add(self)
class Child(db.Model):
__tablename__ = 'children'
id = db.Column(db.Integer, primary_key=True)
def __init__(self, **kw):
super().__init__(**kw)
self._parents = set()
@property
def parents(self):
return self._parents
class ParentXChild(db.Model):
__tablename__ = 'parents_x_children'
parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
child_id = db.Column(db.Integer, db.ForeignKey('children.id'))
query = Parent.outerjoin(ParentXChild).outerjoin(Child).select()
parents = await query.gino.load(
Parent.distinct(Parent.id).load(add_child=Child.distinct(Child.id))).all()
Likewise, there is for now no way to modify the relationships automatically; you’ll have to manually create, delete or modify ParentXChild instances.
Advanced Topics¶
Why Asynchronous ORM?¶
Normally the answer is no, you don’t need an asynchronous ORM. Before moving on, you should read this blog post by Mike Bayer, the author of SQLAlchemy. Though it was written before the uvloop project, his points are still valid now:
- Async is usually not essential for stereotypical database logic;
- Async is slower, not faster, measured in single-routine.
Next, we’ll roughly go through what asynchronous I/O is again, with its pros and cons in practice, as well as when an asynchronous ORM can be helpful.
The Story¶
Let’s say we want to build a search engine. We’ll use a single-core computer to build our index. To make things simpler, our tasks are to fetch web pages (an I/O operation) and process their content (a CPU operation). Each task looks like this:

We have lots of web pages to index, so we simply handle them one by one:

We assume the time of each task is constant. Within 1 second, 2 tasks are done, so we can say the throughput of the current system is 2 tasks/sec. How can we improve the throughput? An obvious answer is to add more CPU cores:

This simply doubles our throughput to 4 tasks/sec, and it scales linearly as we add more CPU cores, if the network is not a bottleneck. But can we improve the throughput of each CPU core? The answer is yes - we can use multi-threading:

Wait a second here: 2 threads barely finished 6 tasks in 2 seconds, so the throughput is only 2.7 tasks/sec, much lower than 4 tasks/sec with 2 cores. What’s wrong with multi-threading? From the diagram we can see:
- There are yellow bars taking up extra time.
- The green bars can still overlap with any bar in the other thread, but
- non-green bars cannot overlap with non-green bars in the other thread.
The yellow bars are the time taken by context switches, a technique that allows multiple threads or processes to run on a single CPU core concurrently. One CPU core can do only one thing at a time (let’s assume a world without Hyper-Threading or anything like that), so in order to run several threads concurrently, the CPU must split its time into small slices, and run a little bit of each thread within these slices. The yellow bar is the very cost of the CPU switching its context to run a different thread. The scale is a bit dramatic, but it helps make the point.
Wait again: the green bars overlap between threads, so is the CPU doing two things at the same time? No, the CPU is doing nothing in the middle of a green bar, because it is waiting for the HTTP response (I/O). That's why multi-threading could improve the throughput to 2.7 tasks/sec, instead of making it worse, say 1.7 tasks/sec. If you actually try running CPU-intensive tasks with multi-threading on a single core, there won't be any improvement. Like the multiplexed red bars (in practice there may be more context switches, depending on the task), they seem to run at the same time, but the total time for all of them to finish is actually longer than running them one by one. That's also why this is called concurrency instead of parallelism.
Foreseeably, as we add more threads, the throughput gain slows down or even turns negative, because context switches waste too much time, not to mention the extra memory footprint of new threads. It is usually not practical to have tens of thousands of threads running on a single CPU core. But is it possible to somehow run tens of thousands of I/O-bound tasks concurrently on a single CPU core? This is the once-famous C10k problem, usually solved by asynchronous I/O:
[Diagram: many tasks multiplexed in one thread with asynchronous I/O, their I/O waits overlapping]
Note
Asynchronous I/O and coroutines are two different things, but they usually work together. Here we shall not go too deep into the rabbit hole of low-level asynchronous I/O, and will stay with coroutines for their simplicity.
Awesome! The throughput is 3.7 tasks/sec, almost as good as the 4 tasks/sec of 2 CPU cores. Though this is not real data, compared to OS threads coroutines take much less context-switch time and memory, which makes them an ideal option for the C10k problem.
Cooperative multitasking¶
So what is coroutine?
In the last diagram above, you may have noticed one difference compared to all the previous diagrams: the green bars overlap within the same thread. That is because the last diagram uses asynchronous I/O, while the rest use blocking I/O. As its name suggests, blocking I/O blocks the thread until the I/O result is ready, so there can be only one blocking I/O operation running in a thread at a time. To achieve concurrency, blocking I/O has to go for multi-threading or multi-processing. In contrast, asynchronous I/O allows thousands (or even more) of concurrent I/O reads and writes within the same thread; each I/O operation blocks only one coroutine instead of the whole thread. Like threads, coroutines here are a way to organize concurrency with asynchronous I/O.
Threads are scheduled by the operating system in an approach called preemptive multitasking. For example, in the previous multi-threading diagram there was only one CPU core. When Thread 2 tried to start processing the first web page content, Thread 1 hadn't finished processing its own. The OS brutally interrupted Thread 1 and shared some resource (CPU time) with Thread 2. But Thread 1 also needed CPU time to finish its own processing, so in turn, after a while, the OS had to pause Thread 2 and resume Thread 1. Depending on the size of the task, such turns may happen several times, so that every thread has a fair chance to run. It is something like this:
Thread 1: I wanna run!
OS: Okay, here you go...
Thread 2: I wanna run!
OS: Urh, alright one sec ... Thread 1, hold on for a while!
Thread 1: Well I'm not done yet, but you are the boss.
OS: It won't be long. Thread 2 it's your turn now.
Thread 2: Yay! (&%#$@..+*&#)
Thread 1: Can I run now?
OS: Just a moment please ... Thread 2, give it a break!
Thread 2: Alright ... but I really need the CPU.
OS: You'll have it later. Thread 1, hurry up!
Differently, coroutines are scheduled by themselves, cooperatively, with the help of an event manager. The event manager lives in the same thread as the coroutines, and it is, interestingly, the opposite of the OS scheduler for threads: while the OS scheduler pauses threads, coroutines pause themselves; a thread knows when it wants to run, but a coroutine doesn't; only the event manager knows which coroutine should run next. The event manager may only trigger the next coroutine to run after the previous coroutine yields control to wait for an event (e.g. an HTTP response). This approach to achieving concurrency is called cooperative multitasking. It's like this:
Coroutine 1: Let me know when event A arrives. I'm done here before that.
Event manager: Okay. What about you, coroutine 2?
Coroutine 2: Um I've got nothing to do here before event B.
Event manager: Cool, I'll be watching.
Event manager: (after a while) Hey coroutine 1, event A is here!
Coroutine 1: Awesome! Let me see ... looks good, but I need event C now.
Event manager: Very well. Seems event B arrived just now, coroutine 2?
Coroutine 2: Oh wonderful! Let me store it in a file ... There! I'm all done.
Event manager: Sweet! Since there's no sign of event C yet, I'll sleep for a while.
(silence)
Event manager: Damn, event C timed out!
Coroutine 1: Arrrrh gotta kill myself with an exception :S
Event manager: Up to you :/
For coroutines, a task cannot be paused externally; only the task itself can pause from within. When there are a lot of coroutines, concurrency depends on each of them pausing from time to time to wait for events. If you write a coroutine that never pauses, it allows no concurrency at all while it runs. On the other hand, you can feel safe in the code between pauses, because no other coroutine can run at the same time to mess up shared state. That's why, in the last diagram, the red bars are not interlaced like they are with threads.
Tip
In Python and asyncio, async def declares coroutines, and await yields control to the event loop (the event manager).
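For example, here is a minimal, GINO-free sketch of two coroutines cooperating on one event loop:

import asyncio


async def fetch(page):
    # awaiting here pauses this coroutine and yields control to the event
    # loop, so the other coroutine may run while we "wait for I/O"
    await asyncio.sleep(0.1)
    return '<html>%s</html>' % page


async def handle(page):
    html = await fetch(page)   # cooperative pause
    return len(html)           # CPU work, runs without interruption


async def main():
    # both handlers run concurrently in the same thread
    print(await asyncio.gather(handle('a'), handle('b')))


asyncio.get_event_loop().run_until_complete(main())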
Pros and cons¶
Asynchronous I/O may handle tens of thousands of concurrent I/O operations in the same thread. Compared to multi-threading, this can save a lot of context-switching time and a lot of memory. Therefore, if you are dealing with lots of I/O-bound tasks concurrently, asynchronous I/O can efficiently use limited CPU and memory to deliver greater throughput.
With coroutines, you can naturally write sequential-looking code that is cooperatively scheduled. If your business logic is complex, coroutines can greatly improve the readability of asynchronous I/O code.
However, for a single task, asynchronous I/O actually makes it slower. Take a simple recv() operation, for example. Blocking I/O would just block and return the result, but asynchronous I/O needs quite a few steps: register for the read event, wait until the event arrives, try to recv(), repeat until a result returns, and finally feed the result to a callback. With coroutines, the framework cost is even larger. Thanks to uvloop this cost has been minimized in Python, but it is still overhead compared to raw blocking I/O.
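To make those steps concrete, here is a rough and greatly simplified sketch using the standard selectors module; a real event loop would multiplex many sockets and dispatch results to callbacks instead of returning directly:

import selectors
import socket


def recv_async_style(sock: socket.socket, nbytes: int) -> bytes:
    # roughly the steps hidden behind one asynchronous recv()
    sock.setblocking(False)
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)   # 1. register for the read event
    try:
        while True:
            sel.select()                       # 2. wait until the event arrives
            try:
                return sock.recv(nbytes)       # 3. try to recv()
            except BlockingIOError:
                continue                       # 4. repeat until a result returns
    finally:
        sel.unregister(sock)
        sel.close()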
Also, asynchronous I/O is unpredictable in time, because of its cooperative nature. For example, in one coroutine you want to sleep for 1 second, but another coroutine takes control and runs for 2 seconds. When we get back to the former coroutine, it is already 2 seconds later. Therefore, sleep(1) means waiting for at least 1 second. In practice, you should try your best to make sure that all code between awaits finishes as soon as possible, being literally cooperative. Still, there can be code beyond your control, so it is important to keep this uncertainty in mind all the time.
At last, asynchronous programming is complicated; it is easier said than done. Debugging is a tough job too, especially when a whole team is working on the same piece of asynchronous code, where things can easily go wrong. Therefore, a general suggestion is to use asynchronous I/O carefully, and only for I/O-bound, high-concurrency scenarios. It is not a drop-in replacement for a performance boost, but more of a double-edged blade for concurrency. And if you are dealing with deadline-intensive tasks, think twice to be sure.
About Database and ORM¶
Finally, GINO. We assume a scenario where asynchronous I/O is required for the server itself anyway, regardless of how we handle the database.
Now we know asynchronous I/O is for I/O-intensive tasks. But isn't it I/O-intensive to frequently talk to a remote database? It depends. Like Mike said, "intensive" is relative to your actual code. Modern databases are super fast and reliable, and the network is reliable within a LAN, so if actual database access takes up only a minority of the program's time, it is not I/O-intensive. Using asynchronous I/O for the database in this case would not improve throughput much, and could even make it worse due to the asynchronous framework overhead we mentioned. It looks easier to just use blocking database operations in your coroutines without harming performance.
But there is a high risk of deadlocks. For example, the first coroutine starts a transaction and updates a row, then the second coroutine tries to update the same row before the first coroutine closes its transaction. The second coroutine will block the whole thread at the non-async update, waiting for the row lock to be released, but the release happens in the first coroutine, which is in turn blocked by the second coroutine. Thus it will block forever.
This may happen even if you have optimized all database interactions to be as quick as possible. Race conditions just happen under pressure, and anything that may block will eventually block. Therefore, don't call blocking methods in coroutines, ever (unless you are 100% sure it won't cause a deadlock).
A simple fix would be to defer the database operations into threads, so that they won't block the main thread and thus won't cause a deadlock so easily. It usually works, and there is even a library to do so, for example along the lines sketched below.
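For instance, a hedged sketch of deferring one blocking call into the default thread pool with asyncio (blocking_query is a hypothetical blocking function, not a GINO API):

import asyncio


def blocking_query(user_id):
    # hypothetical call into a synchronous database driver;
    # it blocks whatever thread it runs in
    ...


async def handler(user_id):
    loop = asyncio.get_event_loop()
    # run the blocking call in a worker thread so the event loop
    # (and all other coroutines) keeps running in the meantime
    return await loop.run_in_executor(None, blocking_query, user_id)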
However, when it comes to ORMs, things become dirty. Traditional ORMs are usually implicit for convenience, with lazy property loading for example. In a larger project, you never know which statement has a side effect that makes an implicit database call and blocks the main thread. Since you cannot put only the underlying database access into the thread pool (you need to await on the deferred database call), you'll start putting pieces of code into the thread pool. But coroutines run only in the main thread, so your code starts to fall apart. This is usually the time when I suggest separating the server into two parts: "normal blocking with ORM" and "asynchronous without ORM".
Eventually this is where GINO can be useful: when the convenience of database abstraction is wanted in a classic asynchronous context. And thanks to asyncpg, the asynchronous overhead is by far still buried in its incredible performance boost.
Sanic Support¶
THIS IS A WIP
Work with Sanic¶
Using the Sanic extension, the request handler acquires a lazy connection on each request, and by default returns the connection to the pool when the response finishes.
The lazy connection is only actually established when necessary, i.e. just before the first access to db.
This behavior is controlled by app.config.DB_USE_CONNECTION_FOR_REQUEST, which is True by default.
Supported configurations:
- DB_HOST
- DB_PORT
- DB_USER
- DB_PASSWORD
- DB_DATABASE
- DB_ECHO
- DB_POOL_MIN_SIZE
- DB_POOL_MAX_SIZE
- DB_USE_CONNECTION_FOR_REQUEST
An example server:
from sanic import Sanic
from sanic.exceptions import abort
from sanic.response import json
from gino.ext.sanic import Gino

app = Sanic()
app.config.DB_HOST = 'localhost'
app.config.DB_DATABASE = 'gino'

db = Gino()
db.init_app(app)


class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.BigInteger(), primary_key=True)
    nickname = db.Column(db.Unicode())

    def __repr__(self):
        return '{}<{}>'.format(self.nickname, self.id)


@app.route("/users/<user_id>")
async def get_user(request, user_id):
    if not user_id.isdigit():
        abort(400, 'invalid user id')
    user = await User.get_or_404(int(user_id))
    return json({'name': user.nickname})


if __name__ == '__main__':
    app.run(debug=True)
Sanic Support¶
To integrate with Sanic, a few configuration values need to be set in app.config (each has a default value):
- DB_HOST: if not set, localhost
- DB_PORT: if not set, 5432
- DB_USER: if not set, postgres
- DB_PASSWORD: if not set, empty string
- DB_DATABASE: if not set, postgres
- DB_ECHO: if not set, False
- DB_POOL_MIN_SIZE: if not set, 5
- DB_POOL_MAX_SIZE: if not set, 10
An example:
from sanic import Sanic
from gino.ext.sanic import Gino
app = Sanic()
app.config.DB_HOST = 'localhost'
app.config.DB_USER = 'postgres'
db = Gino()
db.init_app(app)
After db.init_app, a connection pool with the configured settings is created and bound to db when the Sanic server starts, and closed on stop. Furthermore, a lazy connection context is created on each request, and released on response. That is to say, within Sanic request handlers, you can directly access the db by e.g. User.get(1), and everything else is taken care of: the database pool is created on server start, a connection is lazily borrowed from the pool on the first database access and shared within the rest of the same request handler, and automatically returned to the pool on response.
Please note that, in the async world, await may block unpredictably for a long time. When this crosses RDBMS pools and transactions, it can hurt performance badly, sometimes even causing disasters. Therefore we recommend that, while enjoying fast development, you pay special attention to the scope of transactions and borrowed connections: make sure transactions are closed as soon as possible, and connections are not held for unnecessarily long. As for the Sanic support, if you want to release the concrete connection in the request context before the response is reached, just do it like this:
await request['connection'].release()
Or, if you prefer not to use the contextual lazy connection in certain handlers and would rather manage the connection lifetime explicitly, you can always borrow a new connection by setting reuse=False:
async with db.acquire(reuse=False):
# new connection context is created
Or if you prefer not to use the builtin request-scoped lazy connection at all, you can simply turn it off:
app.config.DB_USE_CONNECTION_FOR_REQUEST = False
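With the request-scoped lazy connection turned off, you can still scope connections explicitly in the handlers that need them, reusing the db.acquire() pattern shown above (a sketch assuming the example User model and aiocontextvars installed):

@app.route("/users/<user_id>")
async def get_user(request, user_id):
    # explicitly scope a connection to just the queries that need it;
    # queries inside the block share this acquired connection
    async with db.acquire():
        user = await User.get_or_404(int(user_id))
    return json({'name': user.nickname})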
Tornado Support¶
THIS IS A WIP
Frequently Asked Questions¶
ORM or not ORM?¶
GINO does perform Object-Relational Mapping work under the Data Mapper pattern, but it is just not a traditional ORM. The objects in GINO are completely stateless with regard to the database: they are plain Python objects in memory. Changing their attribute values does not make them "dirty"; or, in a different way of thinking, they are always "dirty". Any access to the database must be explicitly executed. Using GINO is more like building SQL clauses with models and objects, executing them to make changes in the database, or loading data from the database and wrapping the results with objects again. Objects are just row data containers; you are still dealing with SQL, which is represented by models and SQLAlchemy core grammar. Besides, GINO can be used in a completely non-ORM way.
How to join?¶
GINO invented nothing for building queries; everything is inherited from SQLAlchemy. Therefore you just need to know how to write a join in SQLAlchemy. In particular, Ádám made some amazing upgrades in GINO #113 to make joins easier, so that you can use model classes directly as if they were tables when joining:
results = await User.join(Book).select().gino.all()
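If you also want the joined rows loaded into related model instances, the loader described under gino.api.GinoExecutor.load() applies here as well; assuming a hypothetical Book model with a foreign key to User, a sketch would be:

query = Book.join(User).select()
books = await query.gino.load(Book.load(author=User)).all()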
How to connect to database through SSL?¶
It depends on the dialect and database driver. For asyncpg, keyword arguments of asyncpg.connect() are directly available on create_engine() or db.set_bind(). Therefore, enabling SSL is rather easy:
engine = await gino.create_engine(..., ssl=True)
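If you need more control than ssl=True, asyncpg also accepts a standard ssl.SSLContext, so a sketch with a custom CA file (the path is hypothetical) might look like:

import ssl

import gino

ctx = ssl.create_default_context(cafile='/path/to/ca.pem')  # hypothetical CA path
engine = await gino.create_engine('postgresql://...', ssl=ctx)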
Transaction cannot rollback changes?¶
As for now, make sure aiocontextvars is installed in order to use contextual transactions like this:
async with db.transaction():
    await MyModel.create(name='xxx')
Or, if you'd prefer not to install an additional dependency, you'll have to modify the code to explicitly use the correct connection:
async with db.transaction() as tx:
    await MyModel.create(name='xxx', bind=tx.connection)
Tip
Since GINO 0.7.4, aiocontextvars became a required dependency.
API Reference¶
gino package¶
Subpackages¶
gino.dialects package¶
Submodules¶
gino.dialects.asyncpg module¶
-
class
gino.dialects.asyncpg.
AsyncEnum
(*enums, **kw)[source]¶ Bases:
sqlalchemy.dialects.postgresql.base.ENUM
Construct an
ENUM
.Arguments are the same as that of
types.Enum
, but also including the following parameters.Parameters: create_type – Defaults to True. Indicates that
CREATE TYPE
should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally thatDROP TYPE
is called when the table is dropped. WhenFalse
, no check will be performed and noCREATE TYPE
orDROP TYPE
is emitted, unlesscreate()
ordrop()
are called directly. Setting toFalse
is helpful when invoking a creation scheme to a SQL file without access to the actual database - thecreate()
anddrop()
methods can be used to emit SQL to a target bind.New in version 0.7.4.
-
class
gino.dialects.asyncpg.
AsyncpgCompiler
(dialect, statement, column_keys=None, inline=False, **kwargs)[source]¶ Bases:
sqlalchemy.dialects.postgresql.base.PGCompiler
Construct a new
SQLCompiler
object.Parameters: - dialect –
Dialect
to be used - statement –
ClauseElement
to be compiled - column_keys – a list of column names to be compiled into an INSERT or UPDATE statement.
- inline – whether to generate INSERT statements as “inline”, e.g. not formatted to return any generated defaults
- kwargs – additional keyword arguments to be consumed by the superclass.
-
bindtemplate
¶
- dialect –
-
class
gino.dialects.asyncpg.
AsyncpgCursor
(context, cursor)[source]¶ Bases:
gino.dialects.base.Cursor
-
class
gino.dialects.asyncpg.
AsyncpgDBAPI
[source]¶ Bases:
gino.dialects.base.BaseDBAPI
-
Error
= (<class 'asyncpg.exceptions._base.PostgresError'>, <class 'asyncpg.exceptions._base.InterfaceError'>)¶
-
-
class
gino.dialects.asyncpg.
AsyncpgDialect
(*args, **kwargs)[source]¶ Bases:
sqlalchemy.dialects.postgresql.base.PGDialect
,gino.dialects.base.AsyncDialectMixin
-
colspecs
= {<class 'sqlalchemy.sql.sqltypes.Interval'>: <class 'sqlalchemy.dialects.postgresql.base.INTERVAL'>, <class 'sqlalchemy.sql.sqltypes.Enum'>: <class 'gino.dialects.asyncpg.AsyncEnum'>, <class 'sqlalchemy.sql.sqltypes.ARRAY'>: <class 'sqlalchemy.dialects.postgresql.array.ARRAY'>, <class 'sqlalchemy.sql.sqltypes.JSON.JSONPathType'>: <class 'sqlalchemy.dialects.postgresql.json.JSONPathType'>, <class 'sqlalchemy.sql.sqltypes.JSON'>: <class 'sqlalchemy.dialects.postgresql.json.JSON'>, <class 'sqlalchemy.dialects.postgresql.base.ENUM'>: <class 'gino.dialects.asyncpg.AsyncEnum'>, <class 'sqlalchemy.sql.sqltypes.NullType'>: <class 'gino.dialects.asyncpg.GinoNullType'>}¶
-
cursor_cls
¶ alias of
DBAPICursor
-
dbapi_class
¶ alias of
AsyncpgDBAPI
-
driver
= 'asyncpg'¶
-
execution_ctx_cls
¶ alias of
AsyncpgExecutionContext
-
coroutine
get_isolation_level
(connection)[source]¶ Given an asyncpg connection, return its isolation level.
-
coroutine
has_sequence
(connection, sequence_name, schema=None)[source]¶ Check the existence of a particular sequence in the database.
Given a
Connection
object and a string sequence_name, return True if the given sequence exists in the database, False otherwise.
-
coroutine
has_table
(connection, table_name, schema=None)[source]¶ Check the existence of a particular table in the database.
Given a
Connection
object and a string table_name, return True if the given table (possibly within the specified schema) exists in the database, False otherwise.
-
init_kwargs
= {'command_timeout', 'connection_class', 'database', 'host', 'init', 'loop', 'max_cacheable_statement_size', 'max_cached_statement_lifetime', 'max_inactive_connection_lifetime', 'max_queries', 'max_size', 'min_size', 'passfile', 'password', 'port', 'server_settings', 'setup', 'ssl', 'statement_cache_size', 'timeout', 'user'}¶
-
on_connect
()[source]¶ return a callable which sets up a newly created DBAPI connection.
This is used to set dialect-wide per-connection options such as isolation modes, unicode modes, etc.
If a callable is returned, it will be assembled into a pool listener that receives the direct DBAPI connection, with all wrappers removed.
If None is returned, no listener will be generated.
-
coroutine
set_isolation_level
(connection, level)[source]¶ Given an asyncpg connection, set its isolation level.
-
statement_compiler
¶ alias of
AsyncpgCompiler
-
supports_native_decimal
= True¶
-
-
class
gino.dialects.asyncpg.
AsyncpgExecutionContext
[source]¶ Bases:
gino.dialects.base.ExecutionContextOverride
,sqlalchemy.dialects.postgresql.base.PGExecutionContext
-
class
gino.dialects.asyncpg.
DBAPICursor
(dbapi_conn)[source]¶ Bases:
gino.dialects.base.DBAPICursor
-
description
¶
-
-
class
gino.dialects.asyncpg.
GinoNullType
[source]¶ Bases:
sqlalchemy.sql.sqltypes.NullType
-
result_processor
(dialect, coltype)[source]¶ Return a conversion function for processing result row values.
Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.
If processing is not necessary, the method should return
None
.Parameters: - dialect – Dialect instance in use.
- coltype – DBAPI coltype argument received in cursor.description.
-
-
class
gino.dialects.asyncpg.
Pool
(url, loop, **kwargs)[source]¶ Bases:
gino.dialects.base.Pool
-
raw_pool
¶
-
gino.dialects.base module¶
-
class
gino.dialects.base.
AsyncDialectMixin
[source]¶ Bases:
object
-
cursor_cls
¶ alias of
DBAPICursor
-
-
class
gino.dialects.base.
BaseDBAPI
[source]¶ Bases:
object
-
Error
¶ alias of
builtins.Exception
-
paramstyle
= 'numeric'¶
-
-
class
gino.dialects.base.
ExecutionContextOverride
[source]¶ Bases:
object
-
loader
¶
-
model
¶
-
return_model
¶
-
timeout
¶
-
Module contents¶
gino.ext package¶
Submodules¶
gino.ext.aiohttp module¶
-
class
gino.ext.aiohttp.
AiohttpStrategy
[source]¶ Bases:
gino.strategies.GinoStrategy
-
engine_cls
¶ alias of
GinoEngine
-
name
= 'aiohttp'¶
-
-
class
gino.ext.aiohttp.
Gino
(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[source]¶ Bases:
gino.api.Gino
Support aiohttp.web server.
The common usage looks like this:
from aiohttp import web
from gino.ext.aiohttp import Gino

db = Gino()
app = web.Application(middlewares=[db])
db.init_app(app)
By
init_app()
GINO subscribes to a few signals on aiohttp, so that GINO could use database configuration in aiohttpconfig.gino
to initialize the bound engine. The config includes:driver
- the database driver, default isasyncpg
.host
- database server host, default islocalhost
.port
- database server port, default is5432
.user
- database server user, default ispostgres
.password
- database server password, default is empty.database
- database name, default ispostgres
.dsn
- a SQLAlchemy database URL to create the engine, its existence will replace all previous connect arguments.pool_min_size
- the initial number of connections of the db pool.pool_max_size
- the maximum number of connections in the db pool.echo
- enable SQLAlchemy echo mode.
If the
db
is set as an aiohttp middleware, then a lazy connection is available atrequest['connection']
. By default, a database connection is borrowed on the first query, shared in the same execution context, and returned to the pool on response. If you need to release the connection early in the middle to do some long-running tasks, you can simply do this:
await request['connection'].release(permanent=False)
Parameters: - bind – A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
. - model_classes – A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
. - query_ext – Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
). - schema_ext – Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
). - ext – Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default). - kwargs – Other arguments accepted by
MetaData
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.aiohttp.AiohttpModelMixin'>)¶
-
query_executor
¶ alias of
GinoExecutor
-
coroutine
set_bind
(bind, loop=None, **kwargs)[source]¶ Bind self to the given
GinoEngine
and return it.If the given
bind
is a string orURL
, all arguments will be sent tocreate_engine()
to create a new engine, and return it.Returns: GinoEngine
-
class
gino.ext.aiohttp.
GinoConnection
(dialect, sa_conn, stack=None)[source]¶ Bases:
gino.engine.GinoConnection
-
class
gino.ext.aiohttp.
GinoEngine
(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[source]¶ Bases:
gino.engine.GinoEngine
-
connection_cls
¶ alias of
GinoConnection
-
-
class
gino.ext.aiohttp.
GinoExecutor
(query)[source]¶ Bases:
gino.api.GinoExecutor
gino.ext.quart module¶
gino.ext.sanic module¶
-
class
gino.ext.sanic.
Gino
(app=None, *args, **kwargs)[source]¶ Bases:
gino.api.Gino
Support Sanic web server.
By
init_app()
GINO registers a few hooks on Sanic, so that GINO could use database configuration in Sanicconfig
to initialize the bound engine.A lazy connection context is enabled by default for every request. You can change this default behavior by setting
DB_USE_CONNECTION_FOR_REQUEST
config value toFalse
. By default, a database connection is borrowed on the first query, shared in the same execution context, and returned to the pool on response. If you need to release the connection early in the middle to do some long-running tasks, you can simply do this:
await request['connection'].release(permanent=False)
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.sanic.SanicModelMixin'>)¶
-
query_executor
¶ alias of
GinoExecutor
-
coroutine
set_bind
(bind, loop=None, **kwargs)[source]¶ Bind self to the given
GinoEngine
and return it.If the given
bind
is a string orURL
, all arguments will be sent tocreate_engine()
to create a new engine, and return it.Returns: GinoEngine
-
-
class
gino.ext.sanic.
GinoConnection
(dialect, sa_conn, stack=None)[source]¶ Bases:
gino.engine.GinoConnection
-
class
gino.ext.sanic.
GinoEngine
(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[source]¶ Bases:
gino.engine.GinoEngine
-
connection_cls
¶ alias of
GinoConnection
-
-
class
gino.ext.sanic.
GinoExecutor
(query)[source]¶ Bases:
gino.api.GinoExecutor
-
class
gino.ext.sanic.
SanicStrategy
[source]¶ Bases:
gino.strategies.GinoStrategy
-
engine_cls
¶ alias of
GinoEngine
-
name
= 'sanic'¶
-
gino.ext.tornado module¶
GINO provides a convenient plugin for integrating with Tornado webserver. It consists of two parts, each of them is optional.
Warning
Tornado doesn't wrap request handlers in asyncio tasks, hence task locals don't work in request handlers by default. To fix this, you may either redefine the _execute() method on your handlers to wrap request processing into a task, or simply use gino.ext.tornado.AsyncioRequestHandler as a base class for all of your handlers.
See integrate GINO with application and request handlers for more details.
GINO can provide a web-framework-aware .get_or_404() method which works similarly to .get() but raises an appropriate error whenever the requested object is not found. In the case of Tornado, the appropriate error is tornado.web.HTTPError(404).
To have it working, simply use gino.ext.tornado.Gino
as your
database metadata.
In addition to .get_or_404()
, GINO provides bases for application and
request handler objects.
Inherit your application class from gino.ext.tornado.Application
to automate connection pool management and provide access to the database
object to all of your request handlers via self.application.db
.
Inherit your request handlers from
gino.ext.tornado.AsyncioRequestHandler
to enable task locals
support.
Inherit your request handlers from
gino.ext.tornado.GinoRequestHandler
to enable active connection
management.
Note that gino.ext.tornado.GinoRequestHandler requires your application to have a db property with an acquire coroutine, so it's best to use it with gino.ext.tornado.Application.
GINO defines some options for database configuration. Use them with the standard tornado.options module:
- 'db_driver' – if not set, asyncpg;
- 'db_host' – if not set, localhost;
- 'db_port' – if not set, 5432;
- 'db_user' – if not set, postgres;
- 'db_password' – if not set, empty string;
- 'db_database' – if not set, postgres;
- 'db_echo' – whether to enable SQLAlchemy echo mode, defaults to False;
- dsn – a SQLAlchemy database URL to create the engine, its existence will replace all previous connect arguments;
- 'db_pool_min_size' – number of connections the pool will be initialized with. Default is 5;
- 'db_pool_max_size' – max number of connections in the pool. Default is 10;
- 'db_pool_max_inactive_conn_lifetime' – number of seconds after which inactive connections in the pool will be closed. Pass 0 to disable this mechanism. Default is 300;
- 'db_pool_max_queries' – number of queries after which a connection is closed and replaced with a new connection. Default is 50000.
A helloworld application that uses tornado and GINO may look like this:
import tornado.web
import tornado.ioloop
import tornado.options
import tornado.escape

from gino.ext.tornado import Gino, Application, GinoRequestHandler


# Define your database metadata
# -----------------------------

db = Gino()


# Define tables as you would normally do
# --------------------------------------

class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer(), primary_key=True, autoincrement=True)
    nickname = db.Column(db.Unicode(), nullable=False)


# Now just use your tables
# ------------------------

class AllUsers(GinoRequestHandler):
    async def get(self):
        users = await User.query.gino.all()

        for user in users:
            url = self.application.reverse_url('user', user.id)
            nickname = tornado.escape.xhtml_escape(user.nickname)
            self.write(f'<a href="{url}">{nickname}</a><br/>')


class GetUser(GinoRequestHandler):
    async def get(self, uid):
        user: User = await User.get_or_404(int(uid))
        self.write(f'Hi, {user.nickname}!')


if __name__ == '__main__':
    tornado.options.parse_command_line()
    tornado.ioloop.IOLoop.configure('tornado.platform.asyncio.AsyncIOMainLoop')

    app = Application([
        tornado.web.URLSpec(r'/', AllUsers, name='index'),
        tornado.web.URLSpec(r'/user/(?P<uid>[0-9]+)', GetUser, name='user')
    ], debug=True)

    loop = tornado.ioloop.IOLoop.current().asyncio_loop

    # If you intend to use HTTPServer in multiprocessed environment,
    # call the app.late_init method after calling HTTPServer.start(n).
    # This will create one connection pool per process.
    loop.run_until_complete(app.late_init(db))

    app.listen(8888)
    loop.run_forever()
-
class
gino.ext.tornado.
Application
(handlers: List[Union[Rule, List[Any], Tuple[Union[str, Matcher], Any], Tuple[Union[str, Matcher], Any, Dict[str, Any]], Tuple[Union[str, Matcher], Any, Dict[str, Any], str]]] = None, default_host: str = None, transforms: List[Type[OutputTransform]] = None, **settings)[source]¶ Bases:
tornado.web.Application
Base application that provides access to the database object and defines a convenient method for initializing all the database-related stuff.
-
db
= None¶ The database object associated with this application. Use
late_init()
to init this or set it manually.
-
coroutine
late_init
(db: gino.ext.tornado.Gino, *, loop=None, options=<tornado.options.OptionParser object>)[source]¶ Initialize this application with a database object.
This method does a few things to setup application for working with the database:
- it enables task local storage;
- creates a connection pool and binds it to the passed database object;
- populates
db
.
Parameters: - db – the
gino.ext.tornado.Gino()
class instance that will be used in this application. - loop – io loop that will be used to run heep server, either tornado’s or asyncio’s.
- options – a tornado’s
OptionParser()
instance or any dictionary-like object with the database settings. Default is to usetornado.options.options
global.
-
use_connection_for_request
= True¶ If
True
, enablesGinoRequestHandler
to create lazy connections.See
use_connection_for_request
for more info.
-
-
class
gino.ext.tornado.
AsyncioRequestHandler
(application: tornado.web.Application, request: tornado.httputil.HTTPServerRequest, **kwargs)[source]¶ Bases:
tornado.web.RequestHandler
This class enables support for task locals by wrapping the
_execute()
method intoasyncio.Task
instances.
-
class
gino.ext.tornado.
Gino
(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[source]¶ Bases:
gino.api.Gino
Base class for GINO database.
Using this class as a metadata for your database adds an additional
get_or_404()
method to all of your table classes.Parameters: - bind – A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
. - model_classes – A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
. - query_ext – Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
). - schema_ext – Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
). - ext – Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default). - kwargs – Other arguments accepted by
MetaData
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.tornado.TornadoModelMixin'>)¶
-
query_executor
¶ alias of
GinoExecutor
-
coroutine
set_bind
(bind, loop=None, **kwargs)[source]¶ Bind self to the given
GinoEngine
and return it.If the given
bind
is a string orURL
, all arguments will be sent tocreate_engine()
to create a new engine, and return it.Returns: GinoEngine
- bind – A
-
class
gino.ext.tornado.
GinoConnection
(dialect, sa_conn, stack=None)[source]¶ Bases:
gino.engine.GinoConnection
-
class
gino.ext.tornado.
GinoEngine
(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[source]¶ Bases:
gino.engine.GinoEngine
-
connection_cls
¶ alias of
GinoConnection
-
-
class
gino.ext.tornado.
GinoExecutor
(query)[source]¶ Bases:
gino.api.GinoExecutor
-
class
gino.ext.tornado.
GinoRequestHandler
(application: tornado.web.Application, request: tornado.httputil.HTTPServerRequest, **kwargs)[source]¶ Bases:
gino.ext.tornado.AsyncioRequestHandler
Base class for all request handlers that use GINO.
In addition to features provided by
AsyncioRequestHandler
, this class manages lazy connections for each request.-
db
¶ Access to the database object.
This property is equal to
Application.db
by default.
-
db_connection
¶ The actual connection associated with this request or
None
ifuse_connection_for_request
isFalse
.
-
coroutine
release_connection
()[source]¶ Return the connection associated with this request back to the pool.
-
use_connection_for_request
¶ If
True
, a lazy connection is created for each request.That is, whenever the first query occurs, a new connection is borrowed from the application’s db object. All succeeding queries made within this request will reuse that connection. The connection will be returned to the pool once the request is finished or the
release_connection()
method is called explicitly.This property is equal to
Application.use_connection_for_request
by default.
-
-
class
gino.ext.tornado.
TornadoStrategy
[source]¶ Bases:
gino.strategies.GinoStrategy
-
engine_cls
¶ alias of
GinoEngine
-
name
= 'tornado'¶
-
Module contents¶
Submodules¶
gino.api module¶
-
class
gino.api.
Gino
(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[source]¶ Bases:
sqlalchemy.sql.schema.MetaData
All-in-one API class of GINO, providing several shortcuts.
This class is a subclass of SQLAlchemy
MetaData
, therefore its instances can be used as a normalMetaData
object, e.g. used in Alembic. In usual cases, you would want to define one globalGino
instance, usually under the name ofdb
, representing the database used in your application.You may define tables in the official way SQLAlchemy core recommended, but more often in GINO we define model classes with
db.Model
as their parent class to represent tables, for its objective interface and CRUD operations. Please read CRUD for more information.For convenience,
Gino
instance delegated all properties publicly exposed bysqlalchemy
, so that you can define tables / models without importingsqlalchemy
:id = db.Column(db.BigInteger(), primary_key=True)
Similar to
MetaData
, aGino
object can bind to aGinoEngine
instance, hereby allowing “implicit execution” through thegino
extension onExecutable
orSchemaItem
constructs:
await User.query.gino.first()
await db.gino.create_all()
Differently, GINO encourages the use of implicit execution and manages transactional context correctly.
Binding to a connection object is not supported.
To set a bind property, you can simply set your
GinoEngine
object ondb.bind
, or set it toNone
to unbind. However, the creation of engine usually happens at the same time. Therefore, GINO provided several convenient ways doing so:with_bind()
returning an asynchronous context manager:async with db.with_bind('postgresql://...') as engine:
set_bind()
andpop_bind()
:engine = await db.set_bind('postgresql://...') await db.pop_bind().close()
Directly
await
onGino
instance:db = await gino.Gino('postgresql://...') await db.pop_bind().close()
Note
SQLAlchemy allows creating the engine by:
metadata.bind = 'postgresql://...'
While in GINO this only sets a string to
bind
, because creating an engine requiresawait
, which is exactly what set_bind()
does.At last,
Gino
delegated all query APIs on the boundGinoEngine
.Parameters: - bind – A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
. - model_classes – A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
. - query_ext – Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
). - schema_ext – Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
). - ext – Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default). - kwargs – Other arguments accepted by
MetaData
.
-
Model
¶ Declarative base class for models, subclass of
gino.declarative.Model
. Defining subclasses of this class will result new tables added to thisGino
metadata.
-
acquire
(*args, **kwargs)[source]¶ A delegate of
GinoEngine.acquire()
.
-
coroutine
all
(clause, *multiparams, **params)[source]¶ A delegate of
GinoEngine.all()
.
-
bind
¶ An
GinoEngine
to which thisGino
is bound.This is a simple property with no getter or setter hook - what you set is what you get. To achieve the same result as it is in SQLAlchemy - setting a string or
URL
and getting an engine instance, useset_bind()
(orawait
on thisGino
object after setting a string orURL
).
-
compile
(elem, *multiparams, **params)[source]¶ A delegate of
GinoEngine.compile()
.
-
coroutine
first
(clause, *multiparams, **params)[source]¶ A delegate of
GinoEngine.first()
.
-
iterate
(clause, *multiparams, **params)[source]¶ A delegate of
GinoEngine.iterate()
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>,)¶ Overridable default model classes to build the
Model
.Default is
(CRUDModel, )
.
-
no_delegate
= {'create_engine', 'engine_from_config'}¶ A set of symbols from
sqlalchemy
which is not delegated byGino
.
-
pop_bind
()[source]¶ Unbind self, and return the bound engine.
This is usually used in a chained call to close the engine:
await db.pop_bind().close()
Returns: GinoEngine
orNone
if self is not bound.
-
query_executor
¶ The overridable
gino
extension class onExecutable
.This class will be set as the getter method of the property
gino
onExecutable
and its subclasses, ifext
andquery_ext
arguments are bothTrue
. Default isGinoExecutor
.alias of
GinoExecutor
-
coroutine
scalar
(clause, *multiparams, **params)[source]¶ A delegate of
GinoEngine.scalar()
.
-
schema_visitor
¶ alias of
gino.schema.GinoSchemaVisitor
-
coroutine
set_bind
(bind, loop=None, **kwargs)[source]¶ Bind self to the given
GinoEngine
and return it.If the given
bind
is a string orURL
, all arguments will be sent tocreate_engine()
to create a new engine, and return it.Returns: GinoEngine
-
coroutine
status
(clause, *multiparams, **params)[source]¶ A delegate of
GinoEngine.status()
.
-
transaction
(*args, **kwargs)[source]¶ A delegate of
GinoEngine.transaction()
.
-
with_bind
(bind, loop=None, **kwargs)[source]¶ Shortcut for
set_bind()
andpop_bind()
plus closing engine.This method accepts the same arguments of
create_engine()
. This allows inline creating an engine and binding self on enter, and unbinding self and closing the engine on exit:
async with db.with_bind('postgresql://...') as engine:
    # play with engine
Returns: An asynchronous context manager.
-
class
gino.api.
GinoExecutor
(query)[source]¶ Bases:
object
The default
gino
extension onExecutable
constructs for implicit execution.Instances of this class are created when visiting the
gino
property ofExecutable
instances (also referred as queries or clause elements), for example:await User.query.gino.first()
This allows GINO to add the asynchronous query APIs (
all()
,first()
,scalar()
,status()
,iterate()
) to SQLAlchemy query clauses without messing up with existing synchronous ones. Calling these asynchronous query APIs has the same restriction - the relevant metadata (theGino
instance) must be bound to an engine, or anAttributeError
will be raised.Note
Executable clause elements that are completely irrelevant with any table - for example
db.select([db.text('now()')])
- has no metadata, hence no engine. Therefore, this will always fail:await db.select([db.text('now()')]).gino.scalar()
You should use
conn.scalar()
,engine.scalar()
or evendb.scalar()
in this case.-
coroutine
all
(*multiparams, **params)[source]¶ Returns
engine.all()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
coroutine
first
(*multiparams, **params)[source]¶ Returns
engine.first()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
iterate
(*multiparams, **params)[source]¶ Returns
engine.iterate()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
load
(value)[source]¶ Shortcut to set execution option
loader
in a chaining call.For example to load
Book
instances with their authors:
query = Book.join(User).select()
books = await query.gino.load(Book.load(author=User)).all()
Read
execution_options()
for more information.
-
model
(model)[source]¶ Shortcut to set execution option
model
in a chaining call.Read
execution_options()
for more information.
-
query
¶ Get back the chained
Executable
.In a chained query calls, occasionally the previous query clause is needed after a
.gino.
chain, you can use.query.
to resume the chain back. For example:await User.query.gino.model(FOUser).query.where(...).gino.all()
-
return_model
(switch)[source]¶ Shortcut to set execution option
return_model
in a chaining call.Read
execution_options()
for more information.
-
coroutine
scalar
(*multiparams, **params)[source]¶ Returns
engine.scalar()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
coroutine
status
(*multiparams, **params)[source]¶ Returns
engine.status()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
timeout
(timeout)[source]¶ Shortcut to set execution option
timeout
in a chaining call.Read
execution_options()
for more information.
-
coroutine
gino.crud module¶
-
class
gino.crud.
Alias
(model, *args, **kwargs)[source]¶ Bases:
object
Experimental proxy for table alias on model.
-
class
gino.crud.
CRUDModel
(**values)[source]¶ Bases:
gino.declarative.Model
The base class for models with CRUD support.
Don’t inherit from this class directly, because it has no metadata. Use
db.Model
instead.-
append_where_primary_key
(q)[source]¶ Append where clause to locate this model instance by primary key on the given query, and return the new query.
This is mostly used internally in GINO, but also available for such usage:
await user.append_where_primary_key(User.query).gino.first()
which is identical to:
await user.query.gino.first()
Deprecated since version 0.7.6: Use
lookup()
instead.
-
coroutine
create
(bind=None, timeout=<object object>, **values)¶ This
create
behaves a bit different on model classes compared to model instances.On model classes,
create
will create a new model instance and insert it into database. On model instances,create
will just insert the instance into the database.Under the hood
create()
usesINSERT ... RETURNING ...
to create the new model instance and load it with database default data if not specified.Some examples:
user1 = await User.create(name='fantix', age=32)
user2 = User(name='gino', age=42)
await user2.create()
Parameters: - bind – A
GinoEngine
to execute theINSERT
statement with, orNone
(default) to use the bound engine on the metadata (Gino
). - timeout – Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified. - values – Keyword arguments are pairs of attribute names and their initial values. Only available when called on a model class.
Returns: The instance of this model class (newly created or existing).
- bind – A
-
delete
¶ Similar to
update()
, thisdelete
is also different on model classes than on model instances.On model classes
delete
is an attribute of typeDelete
for massive deletes, for example:await User.delete.where(User.enabled.is_(False)).gino.status()
Similarly you can add a
returning()
clause to the query and it shall return the deleted rows as model objects.And on model instances,
delete()
is a method to remove the corresponding row of this model instance in the database, and returns the status returned from the database:
print(await user.delete())  # e.g. prints DELETE 1
Note
delete()
only removes the row from database, it does not affect the current model instance.Parameters: - bind – An optional
GinoEngine
if current metadata (Gino
) has no bound engine, or specifying a differentGinoEngine
to execute theDELETE
. - timeout – Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.
- bind – An optional
-
classmethod
distinct
(*columns)[source]¶ Experimental loader feature to yield only distinct instances by given columns.
-
coroutine
get
(ident, bind=None, timeout=<object object>)[source]¶ Get an instance of this model class by primary key.
For example:
user = await User.get(request.args.get('user_id'))
Parameters: - ident – Value of the primary key. For composite primary keys this should be a tuple of values for all keys in database order, or a dict of names (or position numbers in database order starting from zero) of all primary keys to their values.
- bind – A
GinoEngine
to execute theINSERT
statement with, orNone
(default) to use the bound engine on the metadata (Gino
). - timeout – Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.
Returns: An instance of this model class, or
None
if no such row.
-
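For instance, a hypothetical UserSetting model with a composite primary key of (user_id, key), in database order, could be fetched like this (a sketch, not from the original docs):

setting = await UserSetting.get((42, 'theme'))
# or equivalently, by primary key names
setting = await UserSetting.get(dict(user_id=42, key='theme'))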
classmethod
load
(*column_names, **relationships)[source]¶ Populates a
loader.Loader
instance to be used by theloader
execution option in order to customize the loading behavior to load specified fields into instances of this model.The basic usage of this method is to provide the
loader
execution option (if you are looking for reloading the instance from database, checkget()
orquery
) for a given query.This method takes both positional arguments and keyword arguments with very different meanings. The positional arguments should be column names as strings, specifying only these columns should be loaded into the model instance (other values are discarded even if they are retrieved from database). Meanwhile, the keyword arguments should be loaders for instance attributes. For example:
u = await User.query.gino.load(User.load('id', 'name')).first()
Tip
gino.load
is a shortcut for setting the execution optionloader
.This will populate a
User
instance with onlyid
andname
values, all the rest are simplyNone
even if the query actually returned all the column values.
q = User.join(Team).select()
u = await q.gino.load(User.load(team=Team)).first()
This will load two instances of model
User
andTeam
, returning theUser
instance withu.team
set to theTeam
instance.Both positional and keyword arguments can be used ath the same time. If they are both omitted, like
Team.load()
, it is equivalent to justTeam
as a loader.Additionally, a
loader.Loader
instance can also be used to generate queries, as its structure is usually the same as the query:u = await User.load(team=Team).query.gino.first()
This generates a query like this:
SELECT users.xxx, ..., teams.xxx, ... FROM users LEFT JOIN teams ON ...
The
Loader
delegates attributes on thequery
, so.query
can be omitted. TheLEFT JOIN
is built-in behavior, while theON
clause is generated based on foreign key. If there is no foreign key, or the condition should be customized, you can use this:u = await User.load( team=Team.on(User.team_id == Team.id)).gino.first()
And you can use both
load()
andon()
at the same time in a chain, in whatever order suits you.See also
-
lookup
()[source]¶ Generate where-clause expression to locate this model instance.
By default this method uses current values of all primary keys, and you can override it to behave differently. Most instance-level CRUD operations depend on this method internally. Particularly while
lookup()
is called inupdate()
, the where condition is used inUpdateRequest.apply()
, so that queries likeUPDATE ... SET id = NEW WHERE id = OLD
could work correctly.Returns: New in version 0.7.6.
-
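As a hypothetical sketch of such an override (not from the original docs), a model could locate its rows by a unique name column instead of the default primary-key condition:

class Setting(db.Model):
    __tablename__ = 'settings'   # hypothetical model

    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.Unicode(), unique=True)
    value = db.Column(db.Unicode())

    def lookup(self):
        # UPDATE/DELETE on this instance will then use "WHERE name = ..."
        # instead of the default primary-key based where clause
        return Setting.name == self.name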
classmethod
on
(on_clause)[source]¶ Customize the on-clause for the auto-generated outer join query.
Note
This has no effect when provided as the
loader
execution option for a given query.See also
-
query
¶ Get a SQLAlchemy query clause of the table behind this model. This equals to
sqlalchemy.select([self.__table__])
. If this attribute is retrieved on a model instance, then a where clause to locate this instance by its primary key is appended to the returning query clause. This model type is set as the execution optionmodel
in the returning clause, so by default the query yields instances of this model instead of database rows.
-
select
()¶ Build a query to retrieve only specified columns from this table.
This method accepts positional string arguments as names of attributes to retrieve, and returns a
Select
for query. The returning query object is always set with two execution options:model
is set to this model typereturn_model
is set toFalse
So that by default it always return rows instead of model instances, while column types can be inferred correctly by the
model
option.For example:
async for row in User.select('id', 'name').gino.iterate():
    print(row['id'], row['name'])
If
select()
is invoked on a model instance, then aWHERE
clause to locate this instance by its primary key is appended to the returning query clause. This is useful when you want to retrieve a latest value of a field on current model instance from database:db_age = await user.select('age').gino.scalar()
See also
-
to_dict
()[source]¶ Convenient method to generate a dict from this model instance.
Keys will be attribute names, while values are loaded from memory (not from database). If there are
JSONProperty
attributes in this model, their source JSON field will not be included in the returning dict - instead the JSON attributes will be.See also
-
update
¶ This
update
behaves quite different on model classes rather than model instances.On model classes,
update
is an attribute of typeUpdate
for massive updates, for example:await User.update.values(enabled=True).where(...).gino.status()
Like
query
, the update query also has themodel
execution option of this model, so if you use thereturning()
clause, the query shall return model objects.However on model instances,
update()
is a method which accepts keyword arguments only and returns anUpdateRequest
to update this single model instance. The keyword arguments are pairs of attribute names and new values. This is the same asUpdateRequest.update()
, feel free to read more about it. A normal usage example would be like this:await user.update(name='new name', age=32).apply()
Here, the
await ... apply()
executes the actualUPDATE
SQL in the database, whileuser.update()
only makes changes in the memory, and collect all changes into anUpdateRequest
instance.
-
-
class
gino.crud.
UpdateRequest
(instance)[source]¶ Bases:
object
A collection of attributes and their new values to update on one model instance.
UpdateRequest
instances are created byCRUDModel.update
, don’t instantiate manually unless required. EveryUpdateRequest
instance is bound to one model instance, all updates are for that one specific model instance and its database row.-
coroutine
apply
(bind=None, timeout=<object object>)[source]¶ Apply pending updates into database by executing an
UPDATE
SQL.Parameters: - bind – A
GinoEngine
to execute the SQL, orNone
(default) to use the bound engine in the metadata. - timeout – Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.
Returns: self
for chaining calls.- bind – A
-
update
(**values)[source]¶ Set given attributes on the bound model instance, and add them into the update collections for
apply()
.Given keyword-only arguments are pairs of attribute names and values to update. This is not a coroutine, calling
update()
will have instant effect on the bound model instance - its in-memory values will be updated immediately. Therefore this can be used individually as a shortcut to update several attributes in a batch:user.update(age=32, disabled=True)
update()
returnsself
for chaining calls to eitherapply()
or anotherupdate()
. If one attribute is updated several times by the sameUpdateRequest
, then only the last value is remembered forapply()
.Updated values can be SQLAlchemy expressions, for example an atomic increment for user balance looks like this:
await user.update(balance=User.balance + 100).apply()
-
coroutine
gino.declarative module¶
-
gino.declarative.
declarative_base
(metadata, model_classes=(<class 'gino.declarative.Model'>, ), name='Model')[source]¶
-
gino.declarative.
declared_attr
(m)[source]¶ Mark a class-level method as a factory of attribute.
This is intended to be used as decorators on class-level methods of a
Model
class. When initializing the class as well as its subclasses, the decorated factory method will be called for each class, the returned result will be set on the class in place of the factory method under the same name.@declared_attr
is implemented differently thandeclared_attr
of SQLAlchemy, but they are both more often used on mixins to dynamically declare indices or constraints (also works for column and__table_args__
, or even normal class attributes):
class TrackedMixin:
    created = db.Column(db.DateTime(timezone=True))

    @db.declared_attr
    def unique_id(cls):
        return db.Column(db.Integer())

    @db.declared_attr
    def unique_constraint(cls):
        return db.UniqueConstraint('unique_id')

    @db.declared_attr
    def poly(cls):
        if cls.__name__ == 'Thing':
            return db.Column(db.Unicode())

    @db.declared_attr
    def __table_args__(cls):
        if cls.__name__ == 'Thing':
            return db.UniqueConstraint('poly'),
Note
This doesn’t work if the model already has a __table__.
gino.engine module¶
-
class
gino.engine.
GinoConnection
(dialect, sa_conn, stack=None)[source]¶ Bases:
object
Represents an actual database connection.
This is the root of all query APIs like all(), first(), scalar() or status(); those on the engine or query are simply wrappers of methods in this class.
Usually instances of this class are created by GinoEngine.acquire().
Note
GinoConnection may refer to zero or one underlying database connection - when a GinoConnection is acquired with lazy=True, the underlying connection may still be in the pool, until a query API is called or get_raw_connection() is called.
Conversely, one underlying database connection can be shared by many GinoConnection instances when they are acquired with reuse=True. The actual database connection is only returned to the pool when the root GinoConnection is released. Read more in the GinoEngine.acquire() method.
See also
-
coroutine
all
(clause, *multiparams, **params)[source]¶ Runs the given query in database, returns all results as a list.
This method accepts the same parameters taken by SQLAlchemy
execute()
. You can pass in a raw SQL string, or any SQLAlchemy query clause.
If the given query clause is built by CRUD models, then the returned rows will be turned into relevant model objects (only one type of model per query is supported for now, no relationship support yet). See execution_options() for more information.
If the given parameters are parsed as “executemany” - bulk inserting multiple rows in one call, for example - the returned result from the database will be discarded and this method will return None.
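A minimal sketch, assuming the User model from the tutorial and a connection acquired from a bound engine:

# CRUD-built clause: rows are loaded into User instances
users = await conn.all(User.query)
# a raw SQL string works as well; without a model execution option the
# rows come back as plain row objects
rows = await conn.all('SELECT * FROM users')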
-
execution_options
(**opt)[source]¶ Set non-SQL options for the connection which take effect during execution.
This method returns a copy of this
GinoConnection
which references the same underlying database connection, but with the given execution options set on the copy. Therefore, it is a good practice to discard the copy immediately after use, for example:
row = await conn.execution_options(model=None).first(User.query)
This is very much the same as SQLAlchemy
execution_options()
, it actually does pass the execution options to the underlying SQLAlchemy Connection. Furthermore, GINO added a few execution options:
Parameters:
- return_model – Boolean to control whether the returned results should be loaded into model instances, where the model class is defined in another execution option model. Default is True.
- model – Specifies the type of model instance to create on return. This has no effect if return_model is set to False. Usually in queries built by CRUD models, this execution option is automatically set. For now, GINO only supports loading each row into one type of model object; relationships are not supported, please use multiple queries for that. None for no postprocessing (default).
- timeout – Seconds to wait for the query to finish. None for no timeout (default).
- loader – A loader expression to load the database rows into a specified objective structure (see the sketch after this list). It can be either:
  - A model class, so that the query will yield model instances of this class. It is your responsibility to make sure all the columns of this model are selected in the query.
  - A Column instance, so that each result will be only a single value of this column. Please note, if you only want to fetch the very first value, you could use first() with this loader, but calling scalar() directly is a more straightforward way.
  - A tuple nesting more loader expressions recursively.
  - A callable() function that will be called for each row to fully customize the result. Two positional arguments will be passed to the function: the first is the row instance, the second is a context object which is only present if nested, else None.
  - A Loader instance directly.
  - Anything else will be treated as literal values, thus returned as whatever they are.
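A minimal sketch of loader expressions. The Team model, its name column and the join to User are illustrative assumptions, not part of the tutorial models:

query = User.outerjoin(Team).select()
results = await conn.execution_options(
    loader=(User, Team.name)  # tuple loader: (User instance, team name value)
).all(query)
for user, team_name in results:
    print(user.nickname, team_name)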
-
coroutine
first
(clause, *multiparams, **params)[source]¶ Runs the given query in database, returns the first result.
If the query returns no result, this method will return
None
. See all() for common query comments.
-
coroutine
get_raw_connection
(*, timeout=None)[source]¶ Get the underlying database connection, acquire one if none present.
Parameters: timeout – Seconds to wait for acquiring the underlying connection.
Returns: The underlying database connection instance, depending on the dialect in use.
Raises: TimeoutError if the acquiring timed out.
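A minimal sketch, assuming the asyncpg dialect so that the returned object is an asyncpg Connection:

raw_conn = await conn.get_raw_connection(timeout=5)
await raw_conn.execute("SET statement_timeout = '5s'")  # asyncpg-level API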
-
iterate
(clause, *multiparams, **params)[source]¶ Creates a server-side cursor in database for large query results.
Cursors must work within transactions:
async with conn.transaction():
    async for user in conn.iterate(User.query):
        # handle each user without loading all users into memory
Alternatively, you can manually control how the cursor works:
async with conn.transaction():
    cursor = await conn.iterate(User.query)
    user = await cursor.next()
    users = await cursor.many(10)
Read more about how
Cursor
works.
Similarly, this method takes the same parameters as
all()
.
-
raw_connection
¶ The current underlying database connection instance, type depends on the dialect in use. May be
None
if self is a lazy connection.
-
coroutine
release
(*, permanent=True)[source]¶ Returns the underlying database connection to its pool.
If permanent=False, this connection will be set to lazy mode with its underlying database connection returned to the pool; the next query on this connection will cause a new database connection to be acquired. This is useful when the connection may be needed again later, but some long-running I/O operations are about to take place which should not hold up a database connection or even a transaction for that long.
Otherwise with permanent=True (default), this connection will be marked as closed after returning to the pool, and will no longer be usable.
If this connection is a reusing connection, then only this connection is closed (depending on permanent); the reused underlying connection will not be returned to the pool.
Practically it is recommended to return connections in the reverse order of how they were borrowed, but if this connection is a reused one that still has other open connections reusing it, then on release the underlying connection will be returned to the pool, and all the reusing connections will lose their available underlying connection. The availability of further operations on those reusing connections depends on the given permanent value.
See also
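A minimal sketch of a non-permanent release, assuming some long-running coroutine do_slow_upload() that needs no database access:

conn = await engine.acquire()
users = await conn.all(User.query)
await conn.release(permanent=False)  # underlying connection goes back to the pool
await do_slow_upload(users)          # no database connection is held here
user = await conn.first(User.query)  # a new underlying connection is acquired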
-
coroutine
scalar
(clause, *multiparams, **params)[source]¶ Runs the given query in database, returns the first value of the first result.
If the query returns no result, this method will return
None
. See all() for common query comments.
-
schema_for_object
= <sqlalchemy.sql.schema._SchemaTranslateMap object>¶ A SQLAlchemy compatibility attribute, don’t use it for now, it bites.
-
coroutine
status
(clause, *multiparams, **params)[source]¶ Runs the given query in database, returns the query status.
The returned query status depends on the underlying database and the dialect in use. For asyncpg it is a string; you can parse it like this: https://git.io/v7oze
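A minimal sketch under the asyncpg dialect, assuming the status is returned as PostgreSQL's command tag string (for example 'UPDATE 3'):

status = await conn.status("UPDATE users SET nickname = 'anonymous'")
affected = int(status.split()[-1])  # last token of the command tag is the row count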
-
transaction
(*args, **kwargs)[source]¶ Starts a database transaction.
There are two ways of using this method: managed, as an asynchronous context manager:
async with conn.transaction() as tx:
    # run query in transaction
or manually awaited:
tx = await conn.transaction()
try:
    # run query in transaction
    await tx.commit()
except Exception:
    await tx.rollback()
    raise
Where the
tx
is an instance of the GinoTransaction class, feel free to read more about it.
In the first managed mode, the transaction is automatically committed on exiting the context block, or rolled back if an exception was raised which led to the exit of the context. In the second manual mode, you’ll need to manually call the commit() or rollback() methods as needed.
If this is a lazy connection, entering a transaction will cause a new database connection to be acquired if none was present.
Transactions may support nesting depending on the dialect in use. For example in asyncpg, starting a second transaction on the same connection will create a save point in the database.
For now, the parameters are directly passed to the underlying database driver, read
asyncpg.connection.Connection.transaction()
for asyncpg.
-
class
gino.engine.
GinoEngine
(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[source]¶ Bases:
object
Connects a
Pool
and Dialect together to provide a source of database connectivity and behavior.
A GinoEngine object is instantiated publicly using the gino.create_engine() function or the db.set_bind() method.
See also
-
acquire
(*, timeout=None, reuse=False, lazy=False, reusable=True)[source]¶ Acquire a connection from the pool.
There are two ways of using this method - as an asynchronous context manager:
async with engine.acquire() as conn:
    # play with the connection
which will guarantee the connection is returned to the pool when leaving the
async with
block; or as a coroutine:
conn = await engine.acquire()
try:
    # play with the connection
finally:
    await conn.release()
where the connection should be manually returned to the pool with
conn.release()
.
Within the same context (usually the same Task, see also Transaction), a nesting acquire by default borrows a new connection; pass reuse=True to share the most recently acquired reusable connection instead.
Parameters:
- timeout – Block up to timeout seconds until there is one free connection in the pool. Default is None - block forever until succeeded. This has no effect when lazy=True, and depends on the actual situation when reuse=True.
- reuse – Reuse the latest reusable acquired connection (before it’s returned to the pool) in the current context if there is one, or borrow a new one if none is present. Default is False for always borrowing a new one. This is useful when you are in a nested series of method calls and wish to use the same connection without passing it around as a parameter. See also: Transaction. A reusing connection is not reusable even if reusable=True. If the reused connection happened to be a lazy one, then the reusing connection is lazy too.
- lazy – Don’t acquire the actual underlying connection yet - do it only when needed. Default is False for always doing it immediately. This is useful before entering a code block which may or may not make use of a given connection object. Feeding in a lazy connection saves the borrow-return round trip if the connection is never used. If setting reuse=True at the same time, the reused connection - if any - applies the same laziness. For example, reusing a lazy connection with lazy=False will cause the reused connection to acquire an underlying connection immediately.
- reusable – Mark this connection as reusable or otherwise. This has no effect if it is a reusing connection. All reusable connections are placed in a stack, and any reusing acquire operation will always reuse the top (latest) reusable connection. One reusable connection may be reused by several reusing connections - they all share one and the same underlying connection. Acquiring a connection with reusable=False and reuse=False makes it a cleanly isolated connection which is only referenced once here.

Returns: A GinoConnection object.
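A minimal sketch of reuse, where the inner acquire shares the underlying connection borrowed by the outer one:

async with engine.acquire() as conn:
    async with engine.acquire(reuse=True) as reusing_conn:
        # both wrappers share one underlying database connection
        await reusing_conn.first(User.query)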
-
coroutine
all
(clause, *multiparams, **params)[source]¶ Acquires a connection with
reuse=True
and runs all() on it.
reuse=True means you can safely do this without borrowing more than one underlying connection:
async with engine.acquire():
    await engine.all('SELECT ...')
The same applies for other query methods.
-
compile
(clause, *multiparams, **params)[source]¶ A shortcut for
compile()
on the dialect, returns raw SQL string and parameters according to the rules of the dialect.
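A hedged sketch, assuming the return value is a (sql, params) pair; the exact placeholder style depends on the dialect in use:

sql, params = engine.compile(User.query.where(User.id == 1))
print(sql, params)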
-
connection_cls
¶ Customizes the connection class to use, default is
GinoConnection
.alias of
GinoConnection
-
current_connection
¶ Gets the most recently acquired reusable connection in the context.
None
if there is no such connection.
Returns: GinoConnection
-
iterate
(clause, *multiparams, **params)[source]¶ Creates a server-side cursor in database for large query results.
This requires that there is a reusable connection in the current context, and an active transaction is present. Then its
GinoConnection.iterate()
is executed and returned.
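A minimal sketch, assuming db.transaction() has implicitly acquired a reusable connection in the current context (aiocontextvars installed):

async with db.transaction():
    async for user in engine.iterate(User.query):
        print(user.nickname)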
-
raw_pool
¶ Read-only access to the underlying database connection pool instance. This depends on the actual dialect in use,
Pool
of asyncpg for example.
-
transaction
(*args, timeout=None, reuse=True, reusable=True, **kwargs)[source]¶ Borrows a new connection and starts a transaction with it.
Different to
GinoConnection.transaction()
, transaction on the engine level supports only managed usage:
async with engine.transaction() as tx:
    # play with transaction here
Where the implicitly acquired connection is available as
tx.connection
.
By default, transaction() acquires the connection with reuse=True and reusable=True, which means it by default tries to create a nested transaction instead of a new transaction on a new connection. You can change the default behavior by setting these two arguments.
The other arguments are the same as
transaction()
on connection.
Returns: An asynchronous context manager that yields a GinoTransaction.
-
update_execution_options
(**opt)[source]¶ Update the default execution_options dictionary of this
GinoEngine
.
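For example, a short sketch setting a default timeout (one of the execution options listed above) for all queries run through this engine:

engine.update_execution_options(timeout=10)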
-
gino.exceptions module¶
gino.json_support module¶
gino.loader module¶
-
class
gino.loader.
AliasLoader
(alias, *column_names, **extras)[source]¶ Bases:
gino.loader.ModelLoader
-
class
gino.loader.
CallableLoader
(func)[source]¶ Bases:
gino.loader.Loader
-
class
gino.loader.
ColumnLoader
(column)[source]¶ Bases:
gino.loader.Loader
-
class
gino.loader.
ModelLoader
(model, *column_names, **extras)[source]¶ Bases:
gino.loader.Loader
-
class
gino.loader.
TupleLoader
(values)[source]¶ Bases:
gino.loader.Loader
-
class
gino.loader.
ValueLoader
(value)[source]¶ Bases:
gino.loader.Loader
gino.schema module¶
-
class
gino.schema.
AsyncSchemaDropper
(dialect, connection, checkfirst=False, tables=None, **kwargs)[source]¶ Bases:
gino.schema.AsyncVisitor
,sqlalchemy.sql.ddl.SchemaDropper
-
class
gino.schema.
AsyncSchemaGenerator
(dialect, connection, checkfirst=False, tables=None, **kwargs)[source]¶ Bases:
gino.schema.AsyncVisitor
,sqlalchemy.sql.ddl.SchemaGenerator
gino.strategies module¶
gino.transaction module¶
-
class
gino.transaction.
GinoTransaction
(conn, args, kwargs)[source]¶ Bases:
object
Represents an underlying database transaction and its connection, offering methods to manage this transaction.
GinoTransaction
is supposed to be created by either gino.engine.GinoConnection.transaction(), or gino.engine.GinoEngine.transaction(), or gino.api.Gino.transaction(), shown as follows:
async with db.transaction() as tx:
    ...

async with engine.transaction() as tx:
    ...

async with conn.transaction() as tx:
    ...

tx = await conn.transaction()
try:
    ...
    await tx.commit()
except Exception:
    await tx.rollback()
    raise
When used as an asynchronous context manager, GinoTransaction will be in managed mode, while the last example with await puts the GinoTransaction in manual mode, where you have to call commit() or rollback() to close the transaction manually.
In managed mode the transaction will be automatically committed or rolled back on exiting the async with block, depending on whether there is an exception or not. Meanwhile, you can explicitly exit the transaction early with raise_commit() or raise_rollback(), which raise an internal exception managed by the asynchronous context manager and interpreted as a commit or rollback action. In a nested transaction situation, the two exit-early methods always close up the very transaction on which they are called - all child transactions are either committed or rolled back correspondingly, while no parent transaction is ever touched. For example:
async with db.transaction() as tx1:
    async with db.transaction() as tx2:
        async with db.transaction() as tx3:
            tx2.raise_rollback()
            # Won't reach here
        # Won't reach here
    # Continues here with tx1, with both tx2 and tx3 rolled back.
    # For PostgreSQL, tx1 can still be committed successfully because
    # tx2 and tx3 are just SAVEPOINTs in transaction tx1
Tip
The internal exception raised from
raise_commit()
and raise_rollback() is a subclass of BaseException, so normal try ... except Exception: can’t trap the commit or rollback.
-
connection
¶ Access to the GinoConnection of this transaction. This is useful when the transaction is started from db or engine, where the connection is implicitly acquired for you together with the transaction.
-
raise_commit
()[source]¶ Only available in managed mode: skip the rest of the code in this transaction and commit immediately by raising an internal exception, which will be caught and handled by the asynchronous context manager:
async with db.transaction() as tx:
    await user.update(age=64).apply()
    tx.raise_commit()
    await user.update(age=32).apply()  # won't reach here
assert user.age == 64  # no exception raised before
-
raise_rollback
()[source]¶ Only available in managed mode: skip the rest of the code in this transaction and roll back immediately by raising an internal exception, which will be caught and handled by the asynchronous context manager:
assert user.age == 64  # assumption
async with db.transaction() as tx:
    await user.update(age=32).apply()
    tx.raise_rollback()
    await user.update(age=128).apply()  # won't reach here
assert user.age == 64  # no exception raised before
-
raw_transaction
¶ Access to the underlying transaction object, whose type depends on the dialect in use.
-
Contributing¶
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions¶
Report Bugs¶
Report bugs at https://github.com/fantix/gino/issues.
If you are reporting a bug, please include:
- Your operating system name and version.
- Any details about your local setup that might be helpful in troubleshooting.
- Detailed steps to reproduce the bug.
Fix Bugs¶
Look through the GitHub issues for bugs. Anything tagged with “bug” and “help wanted” is open to whoever wants to implement it.
Implement Features¶
Look through the GitHub issues for features. Anything tagged with “enhancement” and “help wanted” is open to whoever wants to implement it.
Write Documentation¶
GINO could always use more documentation, whether as part of the official GINO docs, in docstrings, or even on the web in blog posts, articles, and such.
Submit Feedback¶
The best way to send feedback is to file an issue at https://github.com/fantix/gino/issues.
If you are proposing a feature:
- Explain in detail how it would work.
- Keep the scope as narrow as possible, to make it easier to implement.
- Remember that this is a volunteer-driven project, and that contributions are welcome :)
Get Started!¶
Ready to contribute? Here’s how to set up gino for local development.
Fork the gino repo on GitHub.
Clone your fork locally:
$ git clone git@github.com:your_name_here/gino.git
Create a branch for local development:
$ cd gino/
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
Create a virtual environment. Example for virtualenvwrapper:
$ mkvirtualenv gino
Activate the environment and install requirements:
$ pip install -r requirements_dev.txt
When you’re done making changes, check that your changes pass syntax checks:
$ flake8 gino tests
And run the tests (including other Python versions with tox). For the tests you will need a running database server (see the “Tips” section below for configuration details):
$ pytest tests
$ tox
For docs run:
$ make docs
It will build and open up docs in your browser.
Commit your changes and push your branch to GitHub:
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
Submit a pull request through the GitHub website.
Pull Request Guidelines¶
Before you submit a pull request, check that it meets these guidelines:
- The pull request should include tests.
- If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.
- The pull request should work for Python 3.6. Check https://travis-ci.org/fantix/gino/pull_requests and make sure that the tests pass for all supported Python versions.
Tips¶
To run a subset of tests:
$ py.test -svx tests.test_gino
By default the tests run against the locally installed default PostgreSQL database. If you wish to run the tests against a separate database, you can do this by first creating a new database and user using ‘psql’ or similar:
CREATE ROLE gino WITH LOGIN ENCRYPTED PASSWORD 'gino';
CREATE DATABASE gino WITH OWNER = gino;
Then run the tests like so:
$ export DB_USER=gino DB_PASS=gino DB_NAME=gino
$ py.test
Here is an example with the database server running in Docker. Terminal 1 (server):
$ docker run --rm -it -p 5433:5432 postgres:10
Terminal 2 (client):
$ psql -h localhost -p 5433 -U postgres
Now run create role/database commands described above.
Terminal 3 (python):
$ export DB_USER=gino DB_PASS=gino DB_NAME=gino DB_PORT=5433
$ pytest tests/test_aiohttp.py
Credits¶
Development Lead¶
- Fantix King <fantix.king@gmail.com>
Contributors¶
- Tony Wang <wwwjfy@gmail.com>
- Neal Wang <qdzzyb2015@gamil.com>
- Binghan Li <lbhsot@163.com>
- Vladimir Goncharov <dev.zelta@gmail.com>
- Kinware <oss@kinware.com>
- Kentoseth <kentoseth@devcroo.com>
- Ádám Barancsuk <adam.barancsuk@gmail.com>
- Sergey Kovalev <s_kovalev@wargaming.net>
- jonahfang
- Yurii Shtrikker <nbb.unitto@gmail.com>
- Nicolas Crocfer <ncrocfer@gmail.com>
- Denys Badzo <badzyo360@gmail.com>
- Pavol Vargovcik <pavol.vargovcik@gmail.com>
- Mykyta Holubakha <hilobakho@gmail.com>
Special thanks to my wife Daisy and her outsourcing company DecentFoX Studio, for offering me the opportunity to build this project. We are open for global software project outsourcing on Python, iOS and Android development.
History¶
GINO 0.7¶
This is also version 1.0 beta 3.
0.7.7 (2018-12-08)¶
- Backported fix for custom JSON/JSONB type support (#402 #403)
0.7.6 (2018-08-26)¶
- Updated library support (Contributed by Tony Wang in #275 #309)
- Added none_as_none() (#281 #282)
- Added ARRAY alias in asyncpg dialect module (Contributed by Mykyta Holubakha in #289)
- Added Model.lookup() to prevent updating whole table without primary key (#287 #288)
- Added DB_ECHO in extension options (Contributed by Mykyta Holubakha in #298)
- Fixed broken JSON/JSONB result processor since 0.5.8 (Contributed by Tony Wang in #291 #305)
- Removed bad rollback after a failing commit (Contributed by Tony Wang in #302 #304)
- Fixed to raise UninitializedError if bind is None (Contributed by Tony Wang in #307 #310)
0.7.5 (2018-07-26)¶
- Added friendly error message when using abstract models by mistake (#224)
- Supported Python 3.7 (Contributed by Tony Wang in #265)
- Updated documentation
- Fixed a bug in TupleLoader (Contributed by Pavol Vargovcik in #279 #280)
0.7.4 (2018-06-10)¶
- Added aiocontextvars as a required dependency for Python 3.5 and 3.6 (#228)
- Added Quart support (#213)
- Fixed Tornado options parsing (#231)
- Improved coding style and test coverage
0.7.3 (2018-05-19)¶
- Fix for failing binary type (#225)
0.7.2 (2018-05-15)¶
- Added prepared statement support (#14)
- Added dsn in extension config (Contributed by Yurii Shtrikker in #215)
0.7.1 (2018-05-03)¶
- Added support for inline model constraints (Contributed by Kinware in #198)
- Added docs and tests for using SSL (#202)
- Added declared_attr (#204)
- Allowed ModelLoader to passively load partial model (#216)
0.7.0 (2018-04-18)¶
- Added Python 3.5 support (#187)
- Added support to use dict as ident for Model.get (#192)
- Added result loader (partial relationship support) (#13)
- Added documentation on relationship and transaction (#146)
GINO 0.6¶
This is also version 1.0 beta 2.
Migrating to GINO 0.6¶
1. Task Local¶
We created a new Python package aiocontextvars from previous local.py
. If
you made use of the task local features, you should install this package.
Previous gino.enable_task_local()
and gino.disable_task_local()
are
replaced by aiocontextvars.enable_inherit()
and
aiocontextvars.disable_inherit()
. However, in GINO 0.5 they controlled the
whole task local feature switch, while aiocontextvars by default offers task
local even without enable_inherit()
, which controls whether the local
storage should be passed between chained tasks. When enabled, it behaves the
same as enabled in 0.5, but you cannot completely turn off the task local
feature while aiocontextvars is installed.
There is no gino.get_local()
and gino.reset_local()
relevant in
aiocontextvars. The similar thing is aiocontextvars.ContextVar
instance
through its get()
, set()
and delete()
methods.
Previous gino.is_local_root()
is now the equivalent of
not aiocontextvars.Context.current().inherited.
2. Engine¶
GINO 0.6 hides asyncpg.Pool
behind the new SQLAlchemy-alike
gino.GinoEngine
. Instead of doing this in 0.5:
async with db.create_pool('postgresql://...') as pool:
# your code here
You should change it to this in 0.6:
async with db.with_bind('postgresql://...') as engine:
# your code here
This is equivalent to:
engine = await gino.create_engine('postgresql://...')
db.bind = engine
try:
# your code here
finally:
db.bind = None
await engine.close()
Or:
engine = await db.set_bind('postgresql://...')
try:
# your code here
finally:
await db.pop_bind().close()
Or even this:
db = await gino.Gino('postgresql://...')
try:
# your code here
finally:
await db.pop_bind().close()
Choose whichever suits you the best.
Obviously GinoEngine
doesn’t provide asyncpg.Pool
methods directly any
longer, but you can get the underlying asyncpg.Pool
object through
engine.raw_pool
property.
GinoPool.get_current_connection()
is now changed to current_connection
property on GinoEngine
instances to support multiple engines.
GinoPool.execution_option
is gone, instead update_execution_options()
on GinoEngine
instance is available.
GinoPool().metadata
is gone, dialect
is still available.
GinoPool.release()
is removed in GinoEngine
and Gino
, the
release()
method on GinoConnection
object should be used instead.
These methods exist both in 0.5 GinoPool
and 0.6 GinoEngine
:
close()
, acquire()
, all()
, first()
, scalar()
, status()
.
3. GinoConnection¶
Similarly, GinoConnection
in 0.6 is no longer a subclass of
asyncpg.Connection
, instead it has an asyncpg.Connection
instance,
accessible through the GinoConnection.raw_connection
property.
GinoConnection.metadata
is deleted in 0.6, while dialect
remained.
GinoConnection.execution_options()
is changed from a mutable dict in 0.5 to
a method returning a copy of current connection with the new options, the same
as SQLAlchemy behavior.
GinoConnection.release()
is still present, but its default behavior has
been changed to permanently release this connection. You should add the argument
permanent=False
to retain its previous behavior.
And all()
, first()
, scalar()
, status()
, iterate()
,
transaction()
remained in 0.6.
4. Query API¶
All five query APIs all()
, first()
, scalar()
, status()
,
iterate()
now accept the same parameters as SQLAlchemy execute()
,
meaning they accept raw SQL text, or multiple sets of parameters for
“executemany”. Please note, if the parameters are recognized as “executemany”,
none of the methods will return anything. Meanwhile, they no longer accept the
parameter bind
if they did. Just use the API on the GinoEngine
or
GinoConnection
object instead.
5. Transaction¶
Transaction interface is rewritten. Now in 0.6, a GinoTransaction
object is
provided consistently from all 3 methods:
async with db.transaction() as tx:
# within transaction
async with engine.transaction() as tx:
# within transaction
async with engine.acquire() as conn:
async with conn.transaction() as tx:
# within transaction
And different usage with await
:
tx = await db.transaction()
try:
# within transaction
await tx.commit()
except:
await tx.rollback()
raise
The GinoConnection
object is available at tx.connection
, while
underlying transaction object from database driver is available at
tx.transaction
- for asyncpg it is an asyncpg.transaction.Transaction
object.
0.6.6 (2018-05-18)¶
- Backported a fix for failing binary type (#225)
0.6.5 (2018-04-18)¶
- Abandoned 0.6.4 and keep 0.6.x stable
- Backported doc for transaction
0.6.4 (2018-04-16)¶
Abandoned version, please use 0.7.0 instead.
0.6.3 (2018-04-08)¶
- Added aiohttp support
- Added support for calling create() on model instances (Contributed by Kinware in #178 #180)
- Fixed get() by string, and misc environment issues (Contributed by Tony Wang in #191 #193 #183 #184)
0.6.2 (2018-03-24)¶
- Fixed SQLAlchemy prefetch issue (#141)
- Fixed issue that mixin class on Model not working (#174)
- Added more documentation (Thanks Olaf Conradi for reviewing)
0.6.1 (2018-03-18)¶
- Fixed create and drop for Enum type (#160)
- A bit more documentation (#159)
0.6.0 (2018-03-14)¶
- [Breaking] API Refactored, Pool replaced with Engine
- New API Engine replaced asyncpg Pool (#59)
- Supported different dialects, theoretically
- Used aiocontextvars instead of builtin task local (#89)
- [Breaking] Fixed query API with multiparams (executemany) to return correctly (#20)
- [Breaking] The query methods no longer accept the parameter bind
- [Breaking] Gino no longer exposes postgresql types
- Added echo on engine (#142)
- Added tests to cover 80% of code
- Added gino extension on SchemaItem for create_all and so on (#76 #106)
- Added gino extension on model classes for create() or drop()
- Added _update_request_cls on CRUDModel (#147)
- Rewrote the documentation (#146)
GINO 0.5¶
This is also version 1.0 beta 1.
0.5.8 (2018-02-14)¶
- Preparing for 0.6.0 which will be a breaking release
- Fixed wrong value of Enum in creation (Contributed by Sergey Kovalev in #126)
0.5.7 (2017-11-24)¶
This is an emergency fix for 0.5.6.
- Fixed broken lazy connection (Contributed by Ádám Barancsuk in #114)
- Added Model.outerjoin
0.5.6 (2017-11-23)¶
- Changed to use unnamed statement when possible (#80 #90)
- Added more example (Contributed by Kentoseth in #109)
- Added Model.join and made Model selectable (Contributed by Ádám Barancsuk in #112 #113)
0.5.5 (2017-10-18)¶
- Ensured clean connection if transaction acquire fails (Contributed by Vladimir Goncharov in #87)
- Added ability to reset local storage (#84)
- Fixed bug in JSON property update
- Added update chaining feature
0.5.4 (2017-10-04)¶
- Updated example (Contributed by Kinware in #75)
- Added Model.insert (Contributed by Neal Wang in #63)
- Fixed issue that non-lazy acquiring fails dirty (#79)
0.5.3 (2017-09-23)¶
- Fixed “no module named cutils” error (Contributed by Vladimir Goncharov in #73)
0.5.2 (2017-09-10)¶
- Added missing driver name on dialect (#67)
- Fixed dialect to support native decimal type (#67)
0.5.1 (2017-09-09)¶
This is an emergency fix for 0.5.0.
- Reverted the extension, back to pure Python (#60)
- Used SQLAlchemy RowProxy
- Added first_or_404
- Fixed bug that GinoPool cannot be inherited
0.5.0 (2017-09-03)¶
- [Breaking] Internal refactor: extracted and isolated a few modules, partially rewritten
- Extracted CRUD operations
- Core operations are moved to dialect and execution context
- Removed guess_model, switched to explicit execution options
- Turned timeout parameter to an execution option
- Extracted pool, connection and api from asyncpg_delegate
- Added support for SQLAlchemy execution options, and a few custom options
- [Breaking] Made Model.select return rows by default (#39)
- Moved get_or_404 to extensions (#38)
- Added iterator on model classes (#43)
- Added Tornado extension (Contributed by Vladimir Goncharov)
- Added Model.to_dict (#47)
- Added an extension module to update asyncpg.Record with processed results
Early Development Releases¶
Considered as alpha releases.
0.4.1 (2017-08-20)¶
- Support select on model instance
0.4.0 (2017-08-15)¶
- Made get_or_404 more friendly when Sanic is missing (Contributed by Neal Wang in #23 #31)
- Delegated sqlalchemy.__all__ (Contributed by Neal Wang in #10 #33)
- [Breaking] Rewrote JSON/JSONB support (#29)
- Added lazy parameter on db.acquire (Contributed by Binghan Li in #32)
- Added Sanic integration (Contributed by Binghan Li, Tony Wang in #30 #32 #34)
- Fixed iterate API to be compatible with asyncpg (#32)
- Unified exceptions
- [Breaking] Changed update API (#29)
- Bug fixes
0.3.0 (2017-08-07)¶
- Supported __table_args__ (#12)
- Introduced task local to manage connection in context (#19)
- Added query.gino extension for in-place execution
- Refreshed README (#3)
- Adopted PEP 487 (Contributed by Tony Wang in #17 #27)
- Used weakref on __model__ of table and query (Contributed by Tony Wang)
- Delegated asyncpg timeout parameter (Contributed by Neal Wang in #16 #22)
0.2.3 (2017-08-04)¶
- Supported any primary key (Contributed by Tony Wang in #11)
0.2.2 (2017-08-02)¶
- Supported SQLAlchemy result processor
- Added rich support on JSON/JSONB
- Bug fixes
0.2.1 (2017-07-28)¶
- Added update and delete API
0.2.0 (2017-07-28)¶
- Changed API, no longer reuses asyncpg API
0.1.1 (2017-07-25)¶
- Added db.bind
- API changed: parameter conn renamed to optional bind
- Delegated asyncpg Pool with db.create_pool
- Internal enhancement and bug fixes
0.1.0 (2017-07-21)¶
- First release on PyPI.