Welcome¶
GINO, recursively defined as "GINO Is Not ORM", is a lightweight asynchronous Python ORM framework built on top of asyncio and SQLAlchemy core. For now (early 2018) it supports only one engine, asyncpg.
Free software: BSD license
Requirements: Python 3.5
This documentation is still a work in progress; please bear with the parts that are not written yet.
Contents¶
Get Started¶
This is a guide for newcomers that walks through the basics of GINO. Before reading, please make sure you are familiar with:
Relational databases, especially PostgreSQL
Asynchronous programming in Python
Knowledge of SQLAlchemy is not required.
Introduction¶
Simply put, GINO helps you generate and execute SQL statements in your asynchronous application: you operate on your data through a friendly object-oriented API, without writing SQL to talk to the database yourself.
Because asynchronous programming does not make your program faster (at best it does not slow it down) while adding complexity and risk, you may not actually need GINO, or an asynchronous database connection at all. Please read Why Asynchronous ORM? before jumping in.
Installation¶
Stable release¶
To install GINO, run this command in your terminal:
$ pip install gino
This is the preferred way to install GINO, as it always installs the most recent stable release.
If you don't have pip installed, this Python installation guide can walk you through the process.
From sources¶
The source code of GINO can be downloaded from the Github repo.
You can either clone the repository:
$ git clone git://github.com/fantix/gino
Or download the tarball:
$ curl -OL https://github.com/fantix/gino/tarball/master
Once you have a copy of the source, you can install it with:
$ python setup.py install
Declare Models¶
Before we start, we need to create a global instance of Gino, usually named db:
from gino import Gino
db = Gino()
db can be regarded as a handle to the database; most of the later interaction with the database goes through it.
"Model" is a basic concept in GINO: a user-defined class inheriting from db.Model. Each Model subclass maps to a table in the database, and instances of such classes represent rows in the corresponding table. If you have used other ORM products, this mapping should feel familiar. Let's try defining a model:
class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer(), primary_key=True)
    nickname = db.Column(db.Unicode(), default='noname')
By declaring this User class, we are actually defining a database table named users, with two columns id and nickname. Note that the fixed __tablename__ property is required. GINO suggests singular for model names, and plural for table names. Each db.Column property defines one column for the table, where its first parameter indicates the column type in database, while the rest is for other column attributes or constraints. You can find a mapping of database types to db types here in the SQLAlchemy documentation.
Note
SQLAlchemy is a powerful (non-async) ORM library for Python, and GINO is built on top of it. Through its dialect implementations, SQLAlchemy supports many popular RDBMSs including PostgreSQL and MySQL, so that the same Python code can sometimes run unmodified against different databases. GINO inherits this trait, but for now only PostgreSQL (through asyncpg) is supported.
If you need database constraints or indexes covering multiple columns, you can still declare them as model class attributes. The attribute names are not used, but they must not collide. For example:
class Booking(db.Model):
    __tablename__ = 'bookings'

    day = db.Column(db.Date)
    booker = db.Column(db.String)
    room = db.Column(db.String)

    _pk = db.PrimaryKeyConstraint('day', 'booker', name='bookings_pkey')
    _idx1 = db.Index('bookings_idx_day_room', 'day', 'room', unique=True)
    _idx2 = db.Index('bookings_idx_booker_room', 'booker', 'room')
It is also possible to define model constraints and indexes outside the model class if that is preferred. For more details on constraints and indexes, see here in the SQLAlchemy documentation.
Due to some limitations, it is currently not possible to declare standalone database constraints and indexes as class attributes in a parent class; the same applies to __table_args__. GINO provides declared_attr() to support features such as mixin classes; please refer to its API documentation for more information.
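As a rough sketch of what such a mixin might look like (this assumes declared_attr() is exposed as db.declared_attr; the exact mixin semantics may differ, see the API documentation):

class TimestampMixin:
    # hypothetical mixin: each concrete model gets its own column and index
    @db.declared_attr
    def created_at(cls):
        return db.Column(db.DateTime(), server_default=db.func.now())

    @db.declared_attr
    def created_at_idx(cls):
        return db.Index('ix_' + cls.__tablename__ + '_created_at', 'created_at')


class Post(TimestampMixin, db.Model):
    __tablename__ = 'posts'

    id = db.Column(db.Integer(), primary_key=True)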
Get Connected¶
The declarations above only define the mapping; they do not create the tables in the database. To create the tables with GINO, we need to get connected to a database first. Let's create a PostgreSQL database for this tutorial:
$ createdb gino
Then tell our db object to connect to it:
import asyncio
async def main():
    await db.set_bind('postgresql://localhost/gino')

asyncio.get_event_loop().run_until_complete(main())
If this runs successfully, then you are connected to the newly created database. Here postgresql indicates the database dialect to use (the default driver is asyncpg; you can specify it explicitly with postgresql+asyncpg://, or simply asyncpg://), localhost is where the server is, and gino is the name of the database. Check here for more information about how to compose this database URL.
Note
Under the hood, set_bind() calls create_engine() to create an engine and bind it to the db object. A GINO engine is similar to a SQLAlchemy engine, except that the GINO engine is asynchronous while the latter is blocking. Please refer to GINO's API documentation on how to use the engine.
With the connection established, we can now create the tables in the database with GINO (in the same main() function):
await db.gino.create_all()
Warning
It is db.gino.create_all, not db.create_all, because db inherits from SQLAlchemy MetaData, and db.create_all is the blocking SQLAlchemy method, which does not work with the bound GINO engine.
In practice, create_all() is usually not the ideal solution; we recommend managing the database schema with a tool like Alembic.
If you want to disconnect from the database explicitly, you can do this:
await db.pop_bind().close()
Before moving on, let's review all the code so far:
import asyncio
from gino import Gino
db = Gino()
class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer(), primary_key=True)
    nickname = db.Column(db.Unicode(), default='noname')


async def main():
    await db.set_bind('postgresql://localhost/gino')
    await db.gino.create_all()

    # further code goes here

    await db.pop_bind().close()

asyncio.get_event_loop().run_until_complete(main())
CRUD Operations¶
To manipulate the data in the database, GINO provides basic object-based create, retrieve, update and delete operations.
Create¶
Let's start by creating a User object:
user = await User.create(nickname='fantix')
# This will cause GINO to execute this SQL with parameter 'fantix':
# INSERT INTO users (nickname) VALUES ($1) RETURNING users.id, users.nickname
As mentioned before, the user object represents the newly inserted row in the database. You can access the value of each column through the column attributes defined earlier on the user object:
print(f'ID: {user.id}') # 1
print(f'Nickname: {user.nickname}') # fantix
Alternatively, you can create a user object in memory first, and insert it into the database afterwards:
user = User(nickname='fantix')
user.nickname += ' (founder)'
await user.create()
Retrieve¶
To get a model instance by primary key, use the class method get(). For example, to fetch the same row we just inserted:
user = await User.get(1)
# SQL (parameter: 1):
# SELECT users.id, users.nickname FROM users WHERE users.id = $1
Regular SQL queries go through the class attribute query. For example, to get a list of all User objects in the database:
all_users = await db.all(User.query)
# SQL:
# SELECT users.id, users.nickname FROM users
Alternatively, you can use the gino extension on query. For example, the following code does the same thing:
all_users = await User.query.gino.all()
# SQL:
# SELECT users.id, users.nickname FROM users
Note
In fact, User.query is a normal SQLAlchemy query object, and SQLAlchemy's blocking execution methods are still present on it. Therefore GINO injects a gino extension on all SQLAlchemy "Executable" objects, making it easy to execute them asynchronously without touching the original SQLAlchemy API and without going through the engine or the db object every time.
Now let's try adding some filters. For example, to find all users with an ID less than 10:
founding_users = await User.query.where(User.id < 10).gino.all()
# SQL (parameter: 10):
# SELECT users.id, users.nickname FROM users WHERE users.id < $1
Read more here about writing queries, because the query object is exactly from SQLAlchemy core.
Warning
Once you get hold of a model instance, it is completely detached from the database and becomes a plain object in memory. That means even if the corresponding row changes in the database, the instance is not affected at all; likewise, modifying the instance does not touch the database.
Also, GINO does not track model instances, so querying the same row twice gives you two independent instances with identical values; changing one never magically affects the other.
Unlike traditional ORMs whose model instances are usually stateful, a GINO model instance is more like a SQL result wrapped in an object. This simplicity is a deliberate design to fit asynchronous programming, and it is where the name "GINO Is Not ORM" comes from.
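A tiny sketch of this behavior, using the User model from above:

u1 = await User.get(1)
u2 = await User.get(1)

# two independent in-memory objects holding the same values
assert u1 is not u2
assert u1.nickname == u2.nickname

# changing one only touches that object, not the database and not u2
u1.nickname = 'changed in memory only'
assert u2.nickname != u1.nickname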
Sometimes we only need a single object, for example looking up a user by name when verifying a login. There is a convenient way to do that:
user = await User.query.where(User.nickname == 'fantix').gino.first()
# SQL (parameter: 'fantix'):
# SELECT users.id, users.nickname FROM users WHERE users.nickname = $1
If there is no user named "fantix" in the database, user is set to None.
At other times we may want a single value, say the nickname of the user with ID 1. For that, use the class method select():
name = await User.select('nickname').where(User.id == 1).gino.scalar()
# SQL (parameter: 1):
# SELECT users.nickname FROM users WHERE users.id = $1
print(name) # fantix
Or, to count the users:
population = await db.func.count(User.id).gino.scalar()
# SQL:
# SELECT count(users.id) AS count_1 FROM users
print(population) # 17 for example
Update¶
Next, let's try modifying some data. The example below mixes in a few of the queries we used earlier.
# create a new user
user = await User.create(nickname='fantix')
# get its name
name = await User.select('nickname').where(
User.id == user.id).gino.scalar()
assert name == user.nickname # they are both 'fantix' before the update
# modification here
await user.update(nickname='daisy').apply()
# SQL (parameters: 'daisy', 1):
# UPDATE users SET nickname=$1 WHERE users.id = $2 RETURNING users.nickname
print(user.nickname) # daisy
# get its name again
name = await User.select('nickname').where(
User.id == user.id).gino.scalar()
print(name) # daisy
assert name == user.nickname # they are both 'daisy' after the update
Here update() is the first GINO method on a model instance we have met. It accepts multiple keyword arguments, whose names are the model attribute names and whose values are the new values to set. The chained apply() then writes the change to the database.
Note
GINO explicitly splits "changing the values of the in-memory object" and "changing the database row" into two methods: update() and apply(). update() changes the values in memory and records the changes in the returned UpdateRequest object; calling apply() on that UpdateRequest then writes the recorded changes to the database.
Tip
The UpdateRequest object also has a method named update(), which does the same as update() on the model instance, except that it merges the new changes with the ones already recorded in the current UpdateRequest and returns the same UpdateRequest object. That means you can chain several update() calls and finish with a single apply(), or just use the UpdateRequest to make several in-memory changes.
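For example, a minimal sketch of chaining several changes into one apply() (using the User model from above):

user = await User.create(nickname='fantix')

req = user.update(nickname='daisy')       # change recorded in an UpdateRequest
req.update(nickname='daisy the founder')  # merged into the same UpdateRequest
await req.apply()                         # one UPDATE flushes the recorded changes
print(user.nickname)  # daisy the founder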
update() on a model instance only operates on the single database row that instance maps to. If you want to update multiple rows at once, use the update() class method on the model class. The usage is slightly different:
await User.update.values(nickname='Founding Member ' + User.nickname).where(
User.id < 10).gino.status()
# SQL (parameter: 'Founding Member ', 10):
# UPDATE users SET nickname=($1 || users.nickname) WHERE users.id < $2
name = await User.select('nickname').where(
User.id == 1).gino.scalar()
print(name) # Founding Member fantix
There is no UpdateRequest here; everything is back to plain SQLAlchemy usage. Please refer to the SQLAlchemy documentation for more details.
Delete¶
Finally, deleting a row is similar to updating one, only much simpler:
user = await User.create(nickname='fantix')
await user.delete()
# SQL (parameter: 1):
# DELETE FROM users WHERE users.id = $1
print(await User.get(user.id)) # None
Tip
Remember the in-memory object story? In the last print(), even though the row is already gone from the database, the user object is still in memory with its values untouched, so user.id is still usable here.
Or delete in bulk (never forget the where clause, unless you really mean to empty the whole table!):
await User.delete.where(User.id > 10).gino.status()
# SQL (parameter: 10):
# DELETE FROM users WHERE users.id > $1
With basic CRUD at hand, you can already build something incredible with GINO. This is the end of the Get Started guide; to learn more, please read on through the rest of the documentation. Happy hacking!
Schema Declaration¶
There are 3 ways to declare your database schema to be used with GINO. Because GINO is built on top of SQLAlchemy core, whichever way you choose, you are actually declaring SQLAlchemy Table objects.
GINO Engine¶
This is the minimal way to use GINO - using only GinoEngine (and GinoConnection); everything else is vanilla SQLAlchemy core. This is useful when you have legacy code written in SQLAlchemy core that needs porting to asyncio. For new code, please use the other two ways.
For example, the table declaration is the same as SQLAlchemy core tutorial:
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
metadata = MetaData()
users = Table(
'users', metadata,
Column('id', Integer, primary_key=True),
Column('name', String),
Column('fullname', String),
)
addresses = Table(
'addresses', metadata,
Column('id', Integer, primary_key=True),
Column('user_id', None, ForeignKey('users.id')),
Column('email_address', String, nullable=False)
)
Note
When using the GINO Engine only, it is usually your own business to create the tables, either with create_all() on a normal non-async SQLAlchemy engine, or using Alembic. However, it is still possible to do it with GINO if you have to:
import gino
from gino.schema import GinoSchemaVisitor
async def main():
    engine = await gino.create_engine('postgresql://...')
    await GinoSchemaVisitor(metadata).create_all(engine)
Then, construct queries, in SQLAlchemy core too:
ins = users.insert().values(name='jack', fullname='Jack Jones')
So far, everything is still in SQLAlchemy. Now let's get connected and execute the insert:
async def main():
    engine = await gino.create_engine('postgresql://localhost/gino')
    conn = await engine.acquire()
    await conn.status(ins)
    print(await conn.all(users.select()))
    # Outputs: [(1, 'jack', 'Jack Jones')]
Here create_engine() creates a GinoEngine, then acquire() checks out a GinoConnection, and status() executes the insert and returns the status text. This works similarly to SQLAlchemy execute() - they take the same parameters but return results a bit differently. There are also other similar query APIs:
all() returns a list of RowProxy
first() returns one RowProxy, or None
one() returns one RowProxy
one_or_none() returns one RowProxy, or None
scalar() returns a single value, or None
iterate() returns an asynchronous iterator which yields RowProxy
Please go to their API for more information.
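For instance, a small sketch of two of these variants on a connection, reusing the users table and engine from above (the WHERE clause is just for illustration):

import sqlalchemy as sa

async def demo(conn):
    # first(): the first RowProxy, or None when there is no result
    row = await conn.first(users.select().where(users.c.name == 'jack'))
    if row is not None:
        print(row['fullname'])

    # scalar(): a single value, convenient for aggregates
    count = await conn.scalar(sa.select([sa.func.count()]).select_from(users))
    print(count)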
GINO Core¶
In the previous scenario, GinoEngine must not be set as metadata.bind, because it is not a regular SQLAlchemy Engine and thus won't work correctly. For this, GINO provides Gino, a subclass of MetaData, usually instantiated globally under the name db. It can still be used as a normal MetaData while offering some conveniences:
It delegates most public types you can access on sqlalchemy
It works with both normal SQLAlchemy engine and asynchronous GINO engine
It exposes all query APIs on GinoConnection level
It injects two gino extensions on SQLAlchemy query clauses and schema items, allowing short inline execution like users.select().gino.all()
It is also the entry for the third scenario, see later
Then we can achieve the previous scenario with less code, like this:
from gino import Gino
db = Gino()
users = db.Table(
'users', db,
db.Column('id', db.Integer, primary_key=True),
db.Column('name', db.String),
db.Column('fullname', db.String),
)
addresses = db.Table(
'addresses', db,
db.Column('id', db.Integer, primary_key=True),
db.Column('user_id', None, db.ForeignKey('users.id')),
db.Column('email_address', db.String, nullable=False)
)
async def main():
    async with db.with_bind('postgresql://localhost/gino'):
        await db.gino.create_all()

        await users.insert().values(
            name='jack',
            fullname='Jack Jones',
        ).gino.status()

        print(await users.select().gino.all())
        # Outputs: [(1, 'jack', 'Jack Jones')]
Similar to SQLAlchemy core and ORM, this is GINO core. All tables and queries are still made of SQLAlchemy, and its rules still apply, yet sqlalchemy seemingly never needs to be imported. This is useful when an ORM is unwanted.
Tip
asyncpgsa does the same thing, but in a conceptually reversed way - instead of having asyncpg work for SQLAlchemy, it made SQLAlchemy work for asyncpg (GINO used to be in that way too because GINO is inspired by asyncpgsa). Either way works fine, it's just a matter of taste of whose API style to use, SQLAlchemy or asyncpg.
GINO ORM¶
If you want to further reduce the amount of code, at the cost of a bit of implicitness, welcome to the ORM world. Even though GINO made itself not quite a traditional ORM by staying simple and explicit in order to work safely with asyncio, common ORM concepts are still valid - a table is a model class, a row is a model instance. Here is the same example rewritten in GINO ORM:
from gino import Gino

db = Gino()


class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String)
    fullname = db.Column(db.String)


class Address(db.Model):
    __tablename__ = 'addresses'

    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(None, db.ForeignKey('users.id'))
    email_address = db.Column(db.String, nullable=False)


async def main():
    async with db.with_bind('postgresql://localhost/gino'):
        await db.gino.create_all()
        await User.create(name='jack', fullname='Jack Jones')
        print(await User.query.gino.all())
        # Outputs: [<User object at 0x10a8ba860>]
Important
The __tablename__ is a mandatory field to define a concrete model.
As you can see, the declaration is pretty much the same as before. Underneath they are identical, declaring two tables in db. The class style is just more declarative. Instead of users.c.name, you can now access the column as User.name. The implicitly created Table is available at User.__table__ and Address.__table__. You can use anything that works in GINO core here.
Note
Column names can differ between the class property and the database column. For example, name can be declared as nickname = db.Column('name', db.Unicode(), default='noname'). In this example, User.nickname is used to access the column, while in the database the column name is name.
It is worth mentioning that where raw SQL statements are used, or a TableClause is involved (like User.insert()), the original name must be used, because in that case GINO has no knowledge about the mapping.
Tip
db.Model is a dynamically created parent class for your models. It is associated with the db instance on initialization, therefore the table is put into that very db when you declare your model class.
Things become different when it comes to CRUD. You can use model-level methods to directly create() a model instance, instead of inserting a new row, or delete() a model instance without needing to specify the where clause manually. Queries return model instances instead of RowProxy, and row values are directly available as attributes on the model instances. See also: CRUD.
After all, GinoEngine is always in use. Next, let's dig deeper into it.
Engine and Connection¶
GinoEngine is the core of GINO. It acts like a pool of connections, but also does the work of assembling everything together:

Under the hood, engine is associated with a specific dialect instance on creation, e.g. asyncpg dialect. The dialect is actually a set of classes that implements GINO dialect API, offering all the details about how to operate on this specific database. In the diagram, gray color means internal, while green means touchable by end users.
During creation, the engine will also ask the dialect to create a database connection pool for it. The pool type is also a part of the dialect API, because asynchronous database drivers usually have their own pool implementation, thus their GINO dialects should hide such implementation differences behind a unified API for the engine to use.
Note
In SQLAlchemy, database drivers are supposed to follow the DB-API standard, which does not usually provide a pool implementation. Therefore, SQLAlchemy has its own pool implementation, created directly in engine. This is where this diagram doesn't fit SQLAlchemy.
The pool creates raw connections, not the green GinoConnection shown in the diagram. The connection in the diagram is a many-to-one wrapper of the raw connection, because of the reuse and lazy features; we'll get to that part later. The connection is created by the engine, thus inherits the same dialect, and is used for running queries.
On the outer side, SQLAlchemy queries can be executed directly on the engine or on a connection. When executed on the engine, it will acquire a reusable connection to actually run the query, and release the connection after use.
Note
Another difference to SQLAlchemy here: GINO execution methods always return final results, while in SQLAlchemy accessing the result may cause further implicit database accesses. Therefore GINO engine immediately releases the connection when the execution method on the engine returns, but SQLAlchemy can only release the connection implicitly when the result data is found exhausted.
By immediately releasing a connection, GINO may not release the related raw connection when the raw connection was reused from another parent connection. We'll get to this later.
GINO also supports implicit execution without having to specify an engine or connection explicitly. This is done by binding the engine to the db instance, also known as the MetaData or Gino instance. It is possible to bind a GinoConnection instance instead, but that is strongly discouraged because it is very much untested.
At last, as the ORM / CRUD feature, models are just add-ons on top of everything else to generate queries. The parent model class is connected to a db instance on creation, therefore the models can do implicit execution too if their db has a bind.
Then let's get to some details.
Creating Engines¶
GINO reuses the strategy system SQLAlchemy provides to create engines. The name of GINO's strategy for creating an asynchronous GinoEngine is simply gino, but it is only available after gino is imported:
import gino, sqlalchemy


async def main():
    e = await sqlalchemy.create_engine('postgresql://...', strategy='gino')
    # e is a GinoEngine
Tip
Please read this SQLAlchemy document to learn about writing database URLs.
Also, the GINO strategy replaces the default driver of the postgresql:// dialect from psycopg2 to asyncpg, so that you don't have to rewrite the URL, which may be shared between GINO and vanilla SQLAlchemy in parallel. Alternatively, you can explicitly specify the driver to use with postgresql+asyncpg://... or just asyncpg://....
GINO also offers a shortcut, gino.create_engine(), which only sets the default strategy to gino and does nothing more. So here is an identical example:
import gino


async def main():
    e = await gino.create_engine('postgresql://...')
    # e is also a GinoEngine
As you may have noticed, when using the GINO strategy, create_engine() returns a coroutine, which must be awaited for the result, because it will create a database connection pool behind the scenes and, by default, actually make a few initial connections.
Since this is just SQLAlchemy's create_engine(), the same rules for parameters apply in GINO too. For now, GINO only supports a small subset of all the parameters listed in the SQLAlchemy document (we are working on it!):
For Dialect:
For Engine:
While these parameters are discarded by GINO:
In addition, keyword arguments for creating the underlying pool are accepted here. In the case of asyncpg, they are the arguments of create_pool().
For example, we can create an engine without initial connections:
e = await gino.create_engine('postgresql://...', min_size=0)
Similar to SQLAlchemy, GINO also provides a shortcut to create an engine and set it as a bind at the same time. In SQLAlchemy it looks like this:
import sqlalchemy
metadata = sqlalchemy.MetaData()
metadata.bind = 'postgresql://...'
# or in short
metadata = sqlalchemy.MetaData('postgresql://...')
This implicitly calls create_engine() under the hood. However, in GINO creating an engine requires await, which can no longer be hidden behind a normal assignment statement. Therefore, GINO removed the assignment magic in the subclass Gino and reverted it to a simple assignment:
import gino

db = gino.Gino()


async def main():
    # db.bind = 'postgresql://...' doesn't work!! It sets a string on bind
    engine = await gino.create_engine('postgresql://...')
    db.bind = engine
And provided a shortcut to do so:
engine = await db.set_bind('postgresql://...')
And another simpler shortcut for one-time usage:
db = await gino.Gino('postgresql://...')
To unset a bind and close the engine:
engine, db.bind = db.bind, None
await engine.close()
Or with a shortcut correspondingly:
await db.pop_bind().close()
Furthermore, the two steps can be combined into one shortcut with asynchronous context manager:
async with db.with_bind('postgresql://...') as engine:
    # your code here
Managing Connections¶
With a GinoEngine at hand, you can now acquire connections from the pool:
conn = await engine.acquire()
Don't forget to release it after use:
await conn.release()
Yes, this is easy to forget. The recommended way is to use the asynchronous context manager:
async with engine.acquire() as conn:
    # play with the connection
Here conn is a GinoConnection instance. As mentioned previously, a GinoConnection is mapped to an underlying raw connection, as shown in the following diagram:

Each column has at most one actual raw connection, and the numbers indicate the sequence in which the connections are created in this example. It is designed this way so that GINO can offer two features for connection management: reuse and lazy. They are keyword arguments of acquire() and are switched off by default.
reuse¶
When acquiring a GinoConnection (2), GINO will first borrow a raw connection (1) from the underlying pool and assign it to this GinoConnection (2). This is the default behavior of acquire() with no arguments given. Even when you nest two acquires, you still get two actual raw connections borrowed:
async with engine.acquire() as conn1:
    async with engine.acquire() as conn2:
        # conn2 is a completely different connection than conn1
But sometimes conn2 may live in a different method:
async def outer():
    async with engine.acquire() as conn1:
        await inner()


async def inner():
    async with engine.acquire() as conn2:
        # ...
And we probably wish inner could reuse the same raw connection as outer to save some resources, while still borrowing a new one if inner is called individually without outer:
async def outer():
    async with engine.acquire() as conn1:
        await inner(conn1)


async def inner(conn2=None):
    if conn2 is None:
        async with engine.acquire() as conn2:
            # ...
    else:
        # the same ... again
This is exactly the scenario where reuse is useful. We can simply tell acquire() to reuse the most recent reusable connection in the current context by setting reuse=True, as presented in this identical example:
async def outer():
    async with engine.acquire() as conn1:
        await inner()


async def inner():
    async with engine.acquire(reuse=True) as conn2:
        # ...
Back to the previous diagram: the blue GinoConnection instances (3, 4, 6) are "reusing connections" acquired with reuse=True, while the green ones (2, 5, 7) are not, thus becoming "reusable connections". The green reusable connections are put in a stack in the current context, so that acquire(reuse=True) always reuses the most recent connection at the top of the stack. For example, (3) and (4) reuse the only available (2) at that moment, therefore (2, 3, 4) all map to the same raw connection (1). Then after (5), (6) no longer reuses (2), because (5) is now the new top of the stack.
Tip
By context, we are referring to the context concept of contextvars, the new module in Python 3.7, and its partial backport aiocontextvars. Simply speaking, you may treat a series of chained function calls as being in the same context, even across an await. It is something like a thread local for asyncio.
GinoConnection (2) may be created through acquire(reuse=True) too - because the stack is empty before (2), there is nothing to reuse, so (2) upgrades itself to a reusable connection.
Releasing a reusing connection won't cause the reused raw connection to be returned to the pool; only directly releasing the reused GinoConnection does that. Connections should be released in the reverse order of acquisition. If the reused connection is released before its reusing connections by accident, all the reusing connections depending on it become closed, because they are reusing the same raw connection which has been returned to the pool, and any further execution on them will fail. For example, if (3) is released first, then (2) and (4) are still functional. But if (2) is released first, then (3) and (4) are implicitly released and are no longer usable.
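A sketch of that pitfall (the numbers refer to the diagram; this is an anti-pattern, shown only to illustrate the release order):

conn2 = await engine.acquire()            # reusable, borrows a raw connection
conn3 = await engine.acquire(reuse=True)  # reuses conn2's raw connection

await conn2.release()                     # raw connection goes back to the pool
await conn3.scalar('select 1')            # fails: conn3 has nothing left to use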
lazy¶
As you may have noticed, GinoConnection (5) does not have an underlying raw connection, even though it is reused by (6). This is because both (5) and (6) set lazy=True on acquire.
A lazy connection will not borrow a raw connection on creation; it only does so when it has to, e.g. when executing a query or starting a transaction. For example, GinoConnection (7) is acquired lazily without a raw connection, and (8) is only created when a query is executed on (7):
async with engine.acquire(lazy=True) as conn:  # (7)
    await conn.scalar('select now()')  # (8)
On the implementation level, lazy is extremely simple in acquire(): if lazy=False then borrow a raw connection, else do nothing. That's it. Before executing a query or starting a transaction, GinoConnection will always try to borrow a raw connection if none is present. This allows GINO to "transiently release" a raw connection, while all the GinoConnection instances mapped to this raw connection are put back into lazy mode. This is especially useful when you need to run some networking tasks in a database-related context - the networking task may take a long time to finish, and we don't want to keep a connection checked out for nothing. For example:
async with engine.acquire(lazy=True) as conn:  # (7)
    await conn.scalar('select now()')  # (8)
    await conn.release(permanent=False)  # release (8)
    await asyncio.sleep(10)  # simulate long I/O work
    await conn.scalar('select now()')  # re-acquire a new raw connection,
                                       # not necessarily the same (8)
When used together with reuse, at most one raw connection is borrowed for one reusing chain. For example, executing queries on both (5) and (6) will result in only one raw connection being checked out, no matter which executes first. It is also worth noting that if we set lazy=False on (6), then the raw connection will be borrowed immediately on acquire and shared between both (5) and (6). It's been quite a while, so here is the same diagram again:

reusable¶
Usually you don't have to worry about the two options reuse and lazy; the default acquire() will always create a concrete GinoConnection with a new raw connection attached. It is only that such connections are reusable by default (the green ones). If you need an absolutely isolated connection that runs no risk of being reused, you may use reusable=False on acquire. As shown in the diagram, the unreusable GinoConnection is an orphan outside of any stack:
async with engine.acquire():  # (2)
    async with engine.acquire(reusable=False):  # the unreusable connection
        async with engine.acquire(reuse=True):  # (3)
Unreusable connections can be lazy. But it is usually meaningless to specify both reuse=True and reusable=False at the same time, because reusing connections are never reusable anyway - they are not in the stack. You cannot reuse a reusing connection; you can only reuse a reusable connection in the stack. Making a reusing connection unreusable doesn't make the reusable connection it reuses unreusable. If this is getting confusing, just don't use acquire(reuse=True, reusable=False) unless you know what it does.
current_connection¶
Apart from all the scenarios supported by the above three options, there is still one left out: we may want to acquire a reusing-only connection. There is no option for that, but GINO can do the same thing through current_connection, which is always the reusable GinoConnection at the top of the current stack, or None if the current stack is empty.
Tip
The difference between current_connection and acquire(reuse=True) is that the latter always produces a GinoConnection, while the former may not.
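A small sketch of how this might be used, assuming current_connection is available on the engine as described in the API reference:

async def log_now():
    conn = engine.current_connection  # top of the current stack, or None
    if conn is None:
        print('no reusable connection in this context')
    else:
        print(await conn.scalar('select now()'))

async def main():
    async with engine.acquire():
        await log_now()  # runs on the connection acquired above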
Executing Queries¶
Once you have a GinoConnection instance, you can start executing queries with it. There are 6 variants of the execute method: all(), first(), one(), one_or_none(), scalar() and status(). They are basically the same: they accept the same parameters and call the same underlying methods. The difference is how they treat the results (a short sketch follows the list):
all() returns all results in a list, which may be empty when the query has no result - empty, but still a list.
first() returns the first result directly, or None if there is no result at all. There is usually some optimization behind the scenes to efficiently get only the first result, instead of loading the full result set into memory.
one() returns exactly one result. If there is no result at all, or if there are multiple results, an exception is raised.
one_or_none() is similar to one(), but it returns None if there is no result, instead of raising an exception.
scalar() is similar to first(); it returns the first value of the first result. It is quite convenient for retrieving a scalar value from the database, like NOW(), MAX(), COUNT() or whatever generates a single value. None is also returned when there is no result; it is up to you to distinguish between no result and a first value that is NULL.
status() executes the query and discards all the query results. Instead it returns the execution status line as-is, usually a textual string. Note that there may be no optimization to return only the status without loading the results, so make your query generate nothing if you don't want any result.
By "result", I meant RowProxy
of SQLAlchemy - an
immutable row instance with both tuple
and dict
interfaces.
Database values are translated twice before they are eventually stored in a
RowProxy
: first by the database driver (dialect)
from network payload to Python objects (see Type Conversion of
how asyncpg does this), second by SQLAlchemy
result_processor()
depending on the actual
type and dialect.
The arguments taken by these methods are identical to the ones accepted by SQLAlchemy execute() (click to read more): usually a plain SQL string or a SQLAlchemy query clause, followed by query parameters. In the case when multiple dictionaries are given as multiparams, all of these methods will return None, discarding all results. Likewise, the parameter values are processed twice too: first by bind_processor(), then by the database driver.
GINO also supports SQLAlchemy execution_options(), provided either on the engine level, on the connection level or on queries. At the moment we are working on being compatible with SQLAlchemy execution options. In the meanwhile, GINO provides several new execution options; for example, enabling return_model and providing a model will make all() and first() return ORM model instance(s) instead of RowProxy instance(s). See also execution_options() for more information.
In addition, GINO has an iterate() method to traverse the query results progressively, instead of loading all the results at once. This method takes the same arguments as the other execute methods and follows the same rules of data handling. For now, with asyncpg, this creates a server-side cursor.
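A rough sketch of iterate() in use (with asyncpg a server-side cursor needs an open transaction, so the example wraps it in one; the users table is the one from earlier):

async with engine.acquire() as conn:
    async with conn.transaction():
        # rows are fetched progressively from a server-side cursor
        async for row in conn.iterate(users.select()):
            print(row['name'])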
Implicit Execution¶
Acquiring a GinoConnection and executing queries on it is the most explicit way. You can also execute queries on a GinoEngine instance directly. In this case, a connection is acquired with reuse=True for you implicitly, and released after the method returns:
await engine.scalar('select now()')
This is equivalent to:
async with engine.acquire(reuse=True) as conn:
    await conn.scalar('select now()')
This allows you to easily write connectionless code. For example:
async def get_now():
    return await engine.scalar('select now()')


async def main():
    async with engine.acquire():
        now = await get_now()
        await engine.status('UPDATE ...')
In this example, main() will take only one raw connection. get_now() can also work alone, outside of any acquire() context, thanks to reuse.
Furthermore, GINO provides the same query APIs on Gino directly. They are simply delegates to the corresponding API methods on the bind. This allows even engine-less code:
db = gino.Gino()


async def get_now():
    return await db.scalar('select now()')


async def main():
    async with db.with_bind('postgresql://...'):
        now = await get_now()
        await db.status('UPDATE ...')
Note
In this example we didn't put the two queries in an acquire() block, so they might be executed on two different connections.
At last, SQLAlchemy implicit execution on queries also works in GINO, under an extension named gino:
await users_table.select().gino.all()
By default, the extension GinoExecutor is injected on Executable as a property named gino at the creation of the Gino instance. Therefore, any Executable object has the gino property for implicit execution. Similarly, the execution methods call the corresponding ones on the bind of the db instance.
Transaction¶
It is crucial to correctly manage transactions in an asynchronous program, because you never know how long an await will actually wait, and it can cause disasters if transactions are held open for too long. GINO enforces explicit transaction management to help deal with this.
Basic usage¶
Transactions belong to a GinoConnection. The most common way to use transactions is through an async with statement:
async with connection.transaction() as tx:
    await connection.status('INSERT INTO mytable VALUES(1, 2, 3)')
This guarantees that a transaction is opened when entering the async with block, and closed when exiting the block - committed if it exits normally, or rolled back on exception. The underlying transaction instance from the database driver is available at raw_transaction, but in most cases you don't need to touch it.
GINO provides two convenient shortcuts to end the transaction early: raise_commit() and raise_rollback(). They raise an internal exception to commit or roll back the transaction respectively, so the code within the async with block after raise_commit() or raise_rollback() is skipped. The internal exception inherits from BaseException so that a normal try ... except Exception block cannot trap it. The exception stops propagating at the end of the async with block, so you don't need to worry about handling it.
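A minimal sketch of such an early exit (the accounts table and its columns are made up for illustration):

async with connection.transaction() as tx:
    balance = await connection.scalar(
        'SELECT balance FROM accounts WHERE id = 1')
    if balance is None or balance < 10:
        # leave the block early; everything done so far is rolled back
        tx.raise_rollback()
    await connection.status(
        'UPDATE accounts SET balance = balance - 10 WHERE id = 1')
    # code here is skipped when raise_rollback() was called above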
Transactions can also be started on a GinoEngine:
async with engine.transaction() as tx:
    await engine.status('INSERT INTO mytable VALUES(1, 2, 3)')
Here a GinoConnection is borrowed implicitly before entering the transaction, and is guaranteed to be returned after the transaction is done. The GinoConnection instance is accessible at tx.connection. Other than that, everything else is the same.
Important
The implicit connection is borrowed with reuse=True by default. That means using transaction() of GinoEngine within a connection context is the same as calling transaction() of the current connection without having to reference it - no separate connection is created.
Similarly, if your Gino instance has a bind, you may also do the same on it:
async with db.transaction() as tx:
    await db.status('INSERT INTO mytable VALUES(1, 2, 3)')
Nested Transactions¶
Transactions can be nested; a nested transaction creates a savepoint (for now, on asyncpg). A similar example from the asyncpg doc:
async with connection.transaction() as tx1:
    await connection.status('CREATE TABLE mytab (a int)')

    # Create a nested transaction:
    async with connection.transaction() as tx2:
        await connection.status('INSERT INTO mytab (a) VALUES (1), (2)')

        # Rollback the nested transaction:
        tx2.raise_rollback()

# Because the nested transaction was rolled back, there
# will be nothing in `mytab`.
assert await connection.all('SELECT a FROM mytab') == []
As you can see, raise_rollback() breaks only the async with block of the specified tx2; the outer transaction tx1 simply continues. What if we break the outer transaction from within the inner transaction? The inner transaction context won't trap the internal exception, because it recognizes that the exception was not created upon itself. Instead, the inner transaction context only follows the behavior to either commit or roll back, and lets the exception propagate.
Because of the default reusing behavior, transactions on the engine or on db follow the same nesting rules. Please see GinoTransaction for more information.
Manual Control¶
Other than using async with, you can also control the transaction manually:
tx = await connection.transaction()
try:
    await connection.status('INSERT INTO mytable VALUES(1, 2, 3)')
    await tx.commit()
except Exception:
    await tx.rollback()
    raise
You can't use raise_commit() or raise_rollback() here; similarly, it is prohibited to use commit() and rollback() inside an async with block.
CRUD¶
THIS IS A WIP
Loaders and Relationship¶
Loaders are used to load database row results into objects.
GINO doesn't support automated relationships. We insist on an explicit code style in asynchronous programming, and that conflicts with some of the usual ORM relationship patterns. Instead, GINO provides a rich loader system to assist you with manual relationships through foreign keys or whatever magic you prefer. That means you are responsible for writing the queries, and GINO can assemble objects for you from the database result with the loaders you define.
Model Loader¶
The Model Loader is the magic behind GINO CRUD that translates database rows into model objects. Through CRUD, Model Loaders are assembled internally for you, but you can still use them directly. For example, an ordinary query that returns rows may look like this:
query = db.select([User])
rows = await query.gino.all()
In order to load rows into User objects, you can provide an execution option loader with a new ModelLoader instance:
from gino.loader import ModelLoader
query = db.select([User])
query = query.execution_options(loader=ModelLoader(User))
users = await query.gino.all()
The ModelLoader will then load each database row into a User object. As this is frequently used, GINO made it a shortcut:
query = db.select([User])
query = query.execution_options(loader=User.load())
users = await query.gino.all()
And another shortcut:
query = db.select([User])
query = query.execution_options(loader=User)
users = await query.gino.all()
Tip
User as loader is transformed into ModelLoader(User) by Loader.get(), explained later in "Loader Expression".
And again:
query = db.select([User])
users = await query.gino.load(User).all()
This is identical to the normal CRUD query:
users = await User.query.gino.all()
Loader Expression¶
So loaders are actually row post-processors: they define how the database rows should be processed and returned. Other than ModelLoader, there are other loaders that can turn database rows into different results, based on your definition. GINO provides the Loader Expression feature for you to easily assemble complex loaders.
Here is an example using all loaders at once:
uid, user, sep, cols = await db.select([User]).gino.load(
(
User.id,
User,
'|',
lambda row, ctx: len(row),
)
).first()
Let's check this piece by piece. Overall, the argument of load() is a tuple. This is interpreted as a TupleLoader, with each item of the tuple interpreted as a Loader Expression recursively. That means it is possible to nest tuples. The result of a TupleLoader is a tuple.
A Column in a Loader Expression is interpreted as a ColumnLoader. It simply outputs the value of the given column in the database row. It is your responsibility to select the column in the query. Please note that ColumnLoader uses the given column as the index to look up the value, not the name of the column. This is a SQLAlchemy feature to support selecting multiple columns with the same name from different tables in the same query, especially for ORM. So if you are using raw textual SQL and wish to use ColumnLoader, you'll have to declare the columns for the query:
now = db.Column('time', db.DateTime())
result = await db.text(
    "SELECT now() AT TIME ZONE 'UTC'"
).columns(
    now,
).gino.load(
    ('now:', now)
).first()
print(result)  # now: 2018-04-08 08:23:02.431847
Let's get back to the previous example. The second item in the tuple is a GINO model class. As presented previously, it is interpreted as a ModelLoader. By default, it loads the values of all the columns of the given model and creates a new model instance with those values.
Tip
For a complex loader expression, the same row is given to all loaders, so it doesn't matter that User.id is already used before the model loader.
The last item in the tuple is a callable. It will be called for each row with two arguments: the first argument is the row itself, while the second is a contextual value provided by the outer loader (we'll get to that later). Similar to map(), the return value of the call becomes the loaded result.
At last, if none of the above types matches a Loader Expression, it is used as-is - like the '|' separator, which shows up as the third item in every result returned by the query.
Many-to-One Relationship¶
A classic many-to-one relationship is also known as referencing - the model on the "many" end keeps a single reference to the model on the "one" end. Although GINO does not enforce it, usually people use a foreign key for the reference:
class Parent(db.Model):
    __tablename__ = 'parents'

    id = db.Column(db.Integer, primary_key=True)


class Child(db.Model):
    __tablename__ = 'children'

    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
So every child has a single parent (or no parent at all), while one parent may have multiple children. GINO provides an easy way to load children with their parents:
async for child in Child.load(parent=Parent).gino.iterate():
    print(f'Parent of {child.id} is {child.parent.id}')
As you may have noticed, Child.load is exactly the shortcut to create a ModelLoader shown in the very first example. With some additional keyword arguments, Child.load(parent=Parent) is still a ModelLoader for Child, but the model loader is at the same time a query builder. It is identical to do this:
async for child in Child.load(parent=Parent).query.gino.iterate():
    print(f'Parent of {child.id} is {child.parent.id}')
The query attribute dynamically generates a SQLAlchemy query based on the knowledge of the loader, and sets the loader as an execution option at the same time. The Loader simply forwards unknown attributes to its query, which is why .query can be omitted.
For ModelLoader, all keyword arguments are interpreted as subloaders; their results are set as attributes of the result model under the corresponding keys using setattr(). For example, Parent is interpreted as ModelLoader(Parent), which loads Parent instances, and those Parent instances are set as the parent attribute of the outer Child instances.
Warning
If multiple children reference the same parent, each child owns its own parent instance with identical values.
Tip
You don't have to define a parent attribute on Child. But if you do, you gain the ability to customize how the parent is stored or retrieved. For example, let's store the parent instance as _parent:
class Child(db.Model):
    __tablename__ = 'children'

    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))

    _parent = None

    @property
    def parent(self):
        return self._parent

    @parent.setter
    def parent(self, value):
        self._parent = value
The query builder works recursively. For ModelLoader, it uses LEFT OUTER JOIN to connect the FROM clauses, in order to achieve the many-to-one scenario. The ON clause is determined automatically by foreign keys. You can also customize the ON clause in case there is no foreign key (a promise is a promise):
loader = Child.load(parent=Parent.on(Child.parent_id == Parent.id))
async for child in loader.query.gino.iterate():
    print(f'Parent of {child.id} is {child.parent.id}')
And subloaders can be nested:
subloader = Child.load(parent=Parent.on(Child.parent_id == Parent.id))
loader = Grandson.load(parent=subloader.on(Grandson.parent_id == Child.id))
For now, GINO only supports loading many-to-one joined queries. To modify a relationship, just modify the values of the referencing columns.
Self Referencing¶
Warning
Experimental feature.
Self referencing is usually used to create a tree-like structure. For example:
class Category(db.Model):
    __tablename__ = 'categories'

    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('categories.id'))
In order to load leaf categories with their parents, an alias is needed:
Parent = Category.alias()
Then the query would be something like this:
parents = db.select([Category.parent_id])

query = Category.load(parent=Parent.on(
    Category.parent_id == Parent.id
)).where(
    ~Category.id.in_(db.select([Category.alias().parent_id]))
)

async for c in query.gino.iterate():
    print(f'Leaf: {c.id}, Parent: {c.parent.id}')
The generated SQL looks like this:
SELECT categories.id, categories.parent_id, categories_1.id, categories_1.parent_id
FROM categories LEFT OUTER JOIN categories AS categories_1
ON categories.parent_id = categories_1.id
WHERE categories.id NOT IN (
SELECT categories_2.parent_id
FROM categories AS categories_2
)
Other Relationships¶
GINO 0.7.4 introduced an experimental distinct feature to reduce a result set with loaders, combining rows under specified conditions. This made it possible to build one-to-many relationships. Using the same parent-child example above, we could load distinct parents with all their children:
class Parent(db.Model):
    __tablename__ = 'parents'

    id = db.Column(db.Integer, primary_key=True)

    def __init__(self, **kw):
        super().__init__(**kw)
        self._children = set()

    @property
    def children(self):
        return self._children

    @children.setter
    def add_child(self, child):
        self._children.add(child)


class Child(db.Model):
    __tablename__ = 'children'

    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))


query = Child.outerjoin(Parent).select()
parents = await query.gino.load(
    Parent.distinct(Parent.id).load(add_child=Child)).all()
Here the query is still child outer-joining parent, but the loader loads parent instances with distinct IDs only, while storing all their children through the add_child setter property. In detail, for each row a parent instance is loaded first - a new one if no parent instance with the same ID has been loaded previously, otherwise the previously loaded parent instance is reused. Then a child instance is loaded from the same row and fed to the (possibly reused) parent instance through parent.add_child = new_child.
Distinct loaders can be nested to load hierarchical data, but it cannot be used as a query builder to automatically generate queries.
GINO provides no additional support for one-to-one relationships - the user should make sure that the query produces rows of distinct instance pairs, and load them with regular GINO model loaders. When in doubt, the distinct feature can be used on both sides, but you'll have to manually deal with the conflict if more than one related instance is found. For example, we could keep only the last child for each parent:
class Parent(db.Model):
    __tablename__ = 'parents'

    id = db.Column(db.Integer, primary_key=True)

    def __init__(self, **kw):
        super().__init__(**kw)
        self._child = None

    @property
    def child(self):
        return self._child

    @child.setter
    def child(self, child):
        self._child = child


class Child(db.Model):
    __tablename__ = 'children'

    id = db.Column(db.Integer, primary_key=True)
    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))


query = Child.outerjoin(Parent).select()
parents = await query.gino.load(
    Parent.distinct(Parent.id).load(child=Child.distinct(Child.id))).all()
Similarly, you can build many-to-many relationships in the same way:
class Parent(db.Model):
    __tablename__ = 'parents'

    id = db.Column(db.Integer, primary_key=True)

    def __init__(self, **kw):
        super().__init__(**kw)
        self._children = set()

    @property
    def children(self):
        return self._children

    @children.setter
    def add_child(self, child):
        self._children.add(child)
        child._parents.add(self)


class Child(db.Model):
    __tablename__ = 'children'

    id = db.Column(db.Integer, primary_key=True)

    def __init__(self, **kw):
        super().__init__(**kw)
        self._parents = set()

    @property
    def parents(self):
        return self._parents


class ParentXChild(db.Model):
    __tablename__ = 'parents_x_children'

    parent_id = db.Column(db.Integer, db.ForeignKey('parents.id'))
    child_id = db.Column(db.Integer, db.ForeignKey('children.id'))


query = Parent.outerjoin(ParentXChild).outerjoin(Child).select()
parents = await query.gino.load(
    Parent.distinct(Parent.id).load(add_child=Child.distinct(Child.id))).all()
Likewise, there is for now no way to modify the relationships automatically; you'll have to manually create, delete or modify ParentXChild instances.
Advanced Usage of Loaders¶
You could use combined loaders flexibly in complex queries - loading relationships is just one special use case. For example, you could load the count of visits at the same time of loading each user, by using a tuple loader with two items - model loader for the user, and column loader for the count:
import asyncio
import random
import string

import gino
from gino.loader import ColumnLoader

db = gino.Gino()


class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.Unicode())


class Visit(db.Model):
    __tablename__ = 'visits'

    id = db.Column(db.Integer(), primary_key=True)
    time = db.Column(db.DateTime(), server_default='now()')
    user_id = db.Column(db.ForeignKey('users.id'))


async def main():
    async with db.with_bind('postgresql://localhost/gino'):
        await db.gino.create_all()

        for i in range(random.randint(5, 10)):
            u = await User.create(
                name=''.join(random.choices(string.ascii_letters, k=10)))
            for v in range(random.randint(10, 20)):
                await Visit.create(user_id=u.id)

        visits = db.func.count(Visit.id)
        q = db.select([
            User,
            visits,
        ]).select_from(
            User.outerjoin(Visit)
        ).group_by(
            *User,
        ).gino.load((User, ColumnLoader(visits)))
        async with db.transaction():
            async for user, visits in q.iterate():
                print(user.name, visits)

        await db.gino.drop_all()

asyncio.run(main())
Using alias to get ID-ascending pairs from the same table:
ua1 = User.alias()
ua2 = User.alias()
join_query = db.select([ua1, ua2]).where(ua1.id < ua2.id)
loader = ua1.load('id'), ua2.load('id')
result = await join_query.gino.load(loader).all()
print(result)  # e.g. [(1, 2), (1, 3), (2, 3)]
Potentially there could be a lot of different use cases of loaders. We'll add more inspiration here in the future.
Advanced Topics¶
Why Asynchronous ORM?¶
Normally the answer is no, you don't need an asynchronous ORM. Before moving on, you should read this blog post from Mike Bayer, the author of SQLAlchemy. Though it was written before the uvloop project, his points are still valid now:
Async is usually not essential for stereotypical database logic;
Async is slower, not faster, measured in single-routine.
Next, we'll roughly go through what is asynchronous I/O again, and its pros/cons in practice, as well as when asynchronous ORM can be helpful.
The Story¶
Let's say we want to build a search engine. We'll use a single core computer to build our index. To make things simpler, our tasks are to fetch web pages (I/O operation), and process their content (CPU operation). Each task looks like this:

We have lots of web pages to index, so we simply handle them one by one:

Let's assume the time of each task is constant: each second, 2 tasks are done. Thus we can say that the throughput of the current system is 2 tasks/sec. How can we improve the throughput? An obvious answer is to add more CPU cores:

This simply doubles our throughput to 4 tasks/sec, and linearly scales as we add more CPU cores, if the network is not a bottleneck. But can we improve the throughput for each CPU core? The answer is yes, we can use multi-threading:

Wait a second! The 2 threads barely finished 6 tasks in 2 seconds, a throughput of only 2.7 tasks/sec, much lower than 4 tasks/sec with 2 cores. What's wrong with multi-threading? From the diagram we can see:
There are yellow bars taking up extra time.
The green bars can still overlap with any bar in the other thread, but non-green bars cannot overlap with non-green bars in the other thread.
The yellow bars are time taken by context switches, a necessary part of allowing multiple threads or processes to run on a single CPU core concurrently. One CPU core can do only one thing at a time (let's assume a world without Hyper-threading or similar), so in order to run several threads concurrently the CPU must split its time into small slices, and run a little bit of each thread within these slices. The yellow bar is the overhead for the CPU to switch context to run a different thread. The scale is a bit dramatic, but it helps with the point.
Wait again here, the green bars are overlapping between threads. Is the CPU doing two things at the same time? No, the CPU is doing nothing in the middle of the green bar, because it's waiting for the HTTP response (I/O). That's how multi-threading could improve the throughput to 2.7 tasks/sec, instead of decreasing it to 1.7 tasks/sec. You may try it for real: running CPU-intensive tasks with multi-threading on a single core brings no improvement. Like the multiplexed red bars (in practice there might be more context switches depending on the task), they appear to be running at the same time, but the total time for all to finish is actually longer than running the tasks one by one. That's also why this is called concurrency instead of parallelism.
As you might imagine, throughput will improve less with each additional thread, until throughput begins to decrease because context switches are wasting too much time, not to mention the extra memory footprint taken by new threads. It is usually not practical to have tens of thousands of threads running on a single CPU core. How, then, is it possible to have tens of thousands of I/O-bound tasks running concurrently on a single CPU core? This is the once-famous C10k problem, usually solved by asynchronous I/O:

Note
Asynchronous I/O and coroutines are two different things, but they usually go together. Here we will stick with coroutines for simplicity.
Awesome! The throughput is 3.7 tasks/sec, nearly as good as 4 tasks/sec of 2 CPU cores. Though this is not real data, compared to OS threads coroutines do take much less time to context switch and have a lower memory footprint, thus making them an ideal option for the C10k problem.
Cooperative multitasking¶
So what is a coroutine?
In the last diagram above, you may have noticed a difference compared to the previous diagrams: the green bars are overlapping within the same thread. That is because in the last diagram our code is using asynchronous I/O, whereas previously we were using blocking I/O. As the name suggests, blocking I/O will block the thread until the I/O result is ready. Thus, there can be only one blocking I/O operation running in a thread at a time. To achieve concurrency with blocking I/O, either multi-threading or multi-processing must be used. In contrast, asynchronous I/O allows thousands (or even more) of concurrent I/O reads and writes within the same thread, with each I/O operation blocking only the coroutine performing the I/O rather than the whole thread. Like multi-threading, coroutines provide a means to have concurrency during I/O, but unlike multi-threading this concurrency occurs within a single thread.
Threads are scheduled by the operating system using an approach called preemptive multitasking. For example, in previous multi-threading diagram there was only one CPU core. When Thread 2 tried to start processing the first web page content, Thread 1 hadn't finished processing its own. The OS brutally interrupted Thread 1 and shared some resource (time) for Thread 2. But Thread 1 also needed CPU time to finish its processing at the same time, so in turn after a while the OS had to pause Thread 2 and resume Thread 1. Depending on the size of the task, such turns may happen several times, so that every thread may have a fair chance to run. It is something like this:
Thread 1: I wanna run!
OS: Okay, here you go...
Thread 2: I wanna run!
OS: Urh, alright one sec ... Thread 1, hold on for a while!
Thread 1: Well I'm not done yet, but you are the boss.
OS: It won't be long. Thread 2 it's your turn now.
Thread 2: Yay! (&%#$@..+*&#)
Thread 1: Can I run now?
OS: Just a moment please ... Thread 2, give it a break!
Thread 2: Alright ... but I really need the CPU.
OS: You'll have it later. Thread 1, hurry up!
In contrast, coroutines schedule themselves cooperatively with the help of an event manager. The event manager lives in the same thread as the coroutines, and unlike the OS scheduler, which forces context switches on threads, the event manager acts only when coroutines pause themselves. A thread decides for itself when it wants to run, but a coroutine doesn't get to - only the event manager knows which coroutine should run next, and it may only resume a coroutine after the previous one yields control to wait for an event (e.g. an HTTP response). This approach to concurrency is called cooperative multitasking. It goes like this:
Coroutine 1: Let me know when event A arrives. I'm done here before that.
Event manager: Okay. What about you, coroutine 2?
Coroutine 2: Um I've got nothing to do here before event B.
Event manager: Cool, I'll be watching.
Event manager: (after a while) Hey coroutine 1, event A is here!
Coroutine 1: Awesome! Let me see ... looks good, but I need event C now.
Event manager: Very well. Seems event B arrived just now, coroutine 2?
Coroutine 2: Oh wonderful! Let me store it in a file ... There! I'm all done.
Event manager: Sweet! Since there's no sign of event C yet, I'll sleep for a while.
(silence)
Event manager: Damn, event C timed out!
Coroutine 1: Arrrrh gotta kill myself with an exception :S
Event manager: Up to you :/
With coroutines, a task cannot be paused externally; it can only pause itself from within. When there are many coroutines, concurrency depends on each of them pausing from time to time to wait for events. If you wrote a coroutine that never paused, it would allow no concurrency at all while running, because no other coroutine would get a chance to run. On the other hand, you can feel safe in the code between pauses, because no other coroutine can run at the same time and mess up shared state. That is also why, in the last diagram, the red bars are not interleaved the way the threads were.
小技巧
In Python and asyncio, async def declares coroutines, and await yields control to the event loop (the event manager).
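To make the tip concrete, here is a minimal, self-contained sketch (not GINO-specific; the coroutine names and delays are made up) showing two coroutines sharing one thread, where each await is a cooperative pause point:
import asyncio

async def fetch(name, delay):
    # "await" is where this coroutine yields control to the event loop,
    # so the other coroutine can run while we wait for the fake I/O.
    await asyncio.sleep(delay)  # stands in for e.g. an HTTP request
    return '{} done'.format(name)

async def main():
    # Both coroutines run concurrently in the same thread;
    # the total time is about 2 seconds instead of 1 + 2 = 3.
    results = await asyncio.gather(fetch('a', 1), fetch('b', 2))
    print(results)

asyncio.get_event_loop().run_until_complete(main())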
Pros and cons¶
Asynchronous I/O can handle tens of thousands of concurrent I/O operations in the same thread. This saves a lot of CPU time otherwise spent on context switching, and a lot of memory compared to multi-threading. Therefore, if you are dealing with lots of I/O-bound tasks concurrently, asynchronous I/O can use limited CPU and memory efficiently to deliver greater throughput.
With coroutines, you can naturally write sequential code that is cooperatively scheduled. If your business logic is complex, coroutines can greatly improve the readability of asynchronous I/O code.
However, for a single task, asynchronous I/O can actually hurt throughput. For example, for a simple recv() operation, blocking I/O would just block until the result is returned, while asynchronous I/O requires additional steps: register for the read event, wait until the event arrives, try to recv(), repeat until a result is returned, and finally feed the result to a callback. With coroutines, the framework cost is even larger. Thanks to uvloop this cost has been minimized in Python, but it is still extra overhead compared to raw blocking I/O.
Timing in asynchronous I/O is also less predictable because of its cooperative nature. For example, in a coroutine you may want to sleep for 1 second. However, if another coroutine received control and ran for 2 seconds, by the time we get back to the first coroutine 2 seconds have already passed. Therefore, sleep(1) means waiting for at least 1 second. In practice, you should do your best to make sure all code between awaits finishes as soon as possible - be literally cooperative. Still, there can be code outside your control, so it is important to keep this unpredictability of timing in mind.
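A small illustration of this unpredictability (a hand-made sketch with an intentionally badly-behaved coroutine): the first coroutine asks to sleep for 1 second, but is only resumed after about 2 seconds, because the other coroutine never yields while doing blocking work.
import asyncio
import time

async def waiter():
    start = time.monotonic()
    await asyncio.sleep(1)  # "at least 1 second"
    print('woke up after {:.1f}s'.format(time.monotonic() - start))  # ~2.0s

async def hog():
    # A blocking call that never awaits: the event loop is stuck here,
    # so it cannot resume waiter() when its 1 second is up.
    time.sleep(2)

async def main():
    await asyncio.gather(waiter(), hog())

asyncio.get_event_loop().run_until_complete(main())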
Finally, asynchronous programming is complicated. Writing good asynchronous code is easier said than done, and debugging it is harder than debugging equivalent synchronous code - especially when a whole team is working on the same piece of asynchronous code, things can easily go wrong. Therefore, the general suggestion is to use asynchronous I/O carefully, and only for I/O-bound, high-concurrency scenarios. It is not a drop-in performance boost, but more of a double-edged blade for concurrency. And if you are dealing with time-critical tasks, think twice to be sure.
About Database and ORM¶
Finally, GINO. We assume a scenario in which asynchronous I/O is required for the server itself anyway, regardless of how we handle the database.
We now know that asynchronous I/O is for I/O-intensive tasks. But isn't it I/O-intensive to frequently talk to a remote database? It depends. As Mike said, "intensive" is relative to your actual code. Modern databases are very fast and reliable, and the network is reliable if kept within a LAN. If the actual database access time is only a minority of the total time taken by the program, it is not I/O intensive. In that case, using asynchronous I/O for database connections and queries will not improve throughput much, and may even make it worse due to the asynchronous framework overhead mentioned earlier. It may look easier to just use blocking database operations in your coroutines without harming performance.
Using blocking operations in coroutines carries a high risk of causing deadlocks. For example, imagine a coroutine starts a transaction and updates a row before yielding control. A second coroutine then tries to update the same row before the first coroutine closes the transaction. This second coroutine will block on the non-async update, waiting for the row lock to be released - and because the call blocks the whole thread, no other coroutine can run. However, releasing the lock is up to the first coroutine, which can never resume because the second coroutine is blocking the thread. Thus it will block forever.
This may happen even if you have optimized all database interactions to be as quick as possible. Race conditions happen under pressure, and anything that can block will eventually block. Therefore, don't call blocking methods in coroutines, ever (unless you are 100% sure it won't cause a deadlock).
A simple fix is to defer the database operations into threads, so that they won't block the main thread and thus won't easily cause a deadlock. It usually works, and there is even a library to do so. However, when it comes to ORMs, things get messy.
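For example, a common way to defer a blocking call into a thread is asyncio's built-in thread pool; a minimal sketch, where blocking_query is a made-up placeholder for a call into a synchronous driver or ORM:
import asyncio
import time

def blocking_query():
    # Placeholder for a blocking database call made through a synchronous
    # driver/ORM; it blocks whatever thread it runs in.
    time.sleep(0.5)
    return ['row1', 'row2']

async def handler():
    loop = asyncio.get_event_loop()
    # Run the blocking call in the default thread pool, so the event loop
    # (and every other coroutine) keeps running in the meantime.
    rows = await loop.run_in_executor(None, blocking_query)
    return rows

print(asyncio.get_event_loop().run_until_complete(handler()))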
Traditional ORMs are usually implicit for convenience - lazy property loading, for example. In a larger project, you never know which statement has the side effect of making an implicit database call and blocking the main thread. Since you cannot put only the underlying database access into the thread pool (you need to await the deferred database call), you start putting pieces of your own code into the thread pool. But because coroutines run only in the main thread, your code starts to fall apart. This is usually the point at which I suggest splitting the server into two parts: "normal blocking with ORM" and "asynchronous without ORM".
This is where GINO can be useful: it provides the convenience of database abstraction in a genuinely asynchronous context. And thanks to asyncpg, the asynchronous overhead is far outweighed by its incredible performance.
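As a taste of what that looks like in practice - assuming the User model and the db.set_bind() call from earlier in this guide - a query is just another awaitable inside your coroutine, and the event loop is free to serve other work while asyncpg talks to PostgreSQL:
async def get_nicknames():
    # The coroutine yields while waiting for the database, instead of
    # blocking the whole thread like a synchronous ORM call would.
    users = await User.query.gino.all()
    return [u.nickname for u in users]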
Connection Pool¶
Besides the default connection pool, alternative pools can be used for specific use cases. The dialect provides some options (currently only NullPool), and users can define their own pools; custom pools should subclass Pool.
To use non-default pools in raw GINO:
import gino
from gino.dialects.asyncpg import NullPool

engine = await gino.create_engine('postgresql://...', pool_class=NullPool)
To use non-default pools in extensions (taking Sanic as an example):
import sanic
from gino.dialects.asyncpg import NullPool
from gino.ext.sanic import Gino

app = sanic.Sanic()
app.config.DB_HOST = 'localhost'
app.config.DB_KWARGS = dict(
    pool_class=NullPool,
)

db = Gino()
db.init_app(app)
Sanic Support¶
THIS IS A WIP
Work with Sanic¶
Using the Sanic extension, the request handler acquires a lazy connection on each request by default, and returns the connection to the pool when the response finishes.
The lazy connection is only actually established when needed, i.e. just before the first access to the database.
This behavior is controlled by app.config.DB_USE_CONNECTION_FOR_REQUEST, which is True by default.
Supported configurations:
DB_HOST
DB_PORT
DB_USER
DB_PASSWORD
DB_DATABASE
DB_ECHO
DB_POOL_MIN_SIZE
DB_POOL_MAX_SIZE
DB_USE_CONNECTION_FOR_REQUEST
DB_KWARGS
An example server:
from sanic import Sanic
from sanic.exceptions import abort
from sanic.response import json
from gino.ext.sanic import Gino

app = Sanic()
app.config.DB_HOST = 'localhost'
app.config.DB_DATABASE = 'gino'

db = Gino()
db.init_app(app)


class User(db.Model):
    __tablename__ = 'users'

    id = db.Column(db.BigInteger(), primary_key=True)
    nickname = db.Column(db.Unicode())

    def __repr__(self):
        return '{}<{}>'.format(self.nickname, self.id)


@app.route("/users/<user_id>")
async def get_user(request, user_id):
    if not user_id.isdigit():
        abort(400, 'invalid user id')
    user = await User.get_or_404(int(user_id))
    return json({'name': user.nickname})


if __name__ == '__main__':
    app.run(debug=True)
Sanic Support¶
To integrate with Sanic, a few configuration values need to be set in app.config (each has a default value):
DB_HOST: if not set, localhost
DB_PORT: if not set, 5432
DB_USER: if not set, postgres
DB_PASSWORD: if not set, empty string
DB_DATABASE: if not set, postgres
DB_ECHO: if not set, False
DB_POOL_MIN_SIZE: if not set, 5
DB_POOL_MAX_SIZE: if not set, 10
DB_KWARGS: if not set, empty dictionary
An example:
from sanic import Sanic
from gino.ext.sanic import Gino
app = Sanic()
app.config.DB_HOST = 'localhost'
app.config.DB_USER = 'postgres'
db = Gino()
db.init_app(app)
After db.init_app, a connection pool with the configured settings will be created and bound to db when the Sanic server starts, and closed when it stops. Furthermore, a lazy connection context is created on each request and released on response. That is to say, within Sanic request handlers you can directly access the database with e.g. User.get(1); everything else is taken care of: the database pool is created on server start, a connection is lazily borrowed from the pool on the first database access, shared within the rest of the same request handler, and automatically returned to the pool on response.
Please note that in the async world, an await may block unpredictably for a long time. Combined with RDBMS connection pools and transactions, this can be very dangerous for performance and may sometimes even cause disasters. Therefore we recommend, while enjoying fast development, that you pay special attention to the scope of transactions and borrowed connections: make sure transactions are closed as soon as possible, and connections are not held for unnecessarily long. As for the Sanic support, if you want to release the concrete connection in the request context before the response is reached, just do it like this:
await request['connection'].release()
Or, if you prefer not to use the contextual lazy connection in certain handlers and would rather manage the connection lifetime explicitly, you can always borrow a new connection by setting reuse=False:
async with db.acquire(reuse=False):
    ...  # a new connection context is created
Or if you prefer not to use the builtin request-scoped lazy connection at all, you can simply turn it off:
app.config.DB_USE_CONNECTION_FOR_REQUEST = False
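With the request-scoped connection turned off, you manage connections yourself inside handlers; a minimal sketch reusing the app, db, User model and json import from the example server above (the route path is made up):
@app.route('/nicknames')
async def list_nicknames(request):
    # No contextual lazy connection here, so borrow one explicitly and
    # return it to the pool when the block exits.
    async with db.acquire() as conn:
        users = await conn.all(User.query)
    return json([u.nickname for u in users])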
Tornado Support¶
THIS IS A WIP
Frequently Asked Questions¶
ORM or not ORM?¶
GINO does perform Object-Relational Mapping work following the Data Mapper pattern, but it is just not a traditional ORM. The objects in GINO are completely stateless with regard to the database - they are plain Python objects in memory. Changing their attribute values does not make them "dirty" - or, thinking about it the other way, they are always "dirty". Any access to the database must be explicitly executed. Using GINO is more like composing SQL clauses with models and objects, executing them to make changes in the database, or loading data from the database and wrapping the results with objects again. Objects are just row data containers; you are still dealing with SQL, which is represented by models and SQLAlchemy core constructs. Besides, GINO can also be used in a completely non-ORM way.
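In other words, assigning attributes only changes the in-memory object; persisting anything is always an explicit, awaited statement. A small sketch (to be run inside a coroutine, with db bound as in the tutorial; the nickname values are made up):
user = await User.get(1)
user.nickname = 'in-memory only'   # changes the Python object, not the table

# Persisting a change is explicit:
await user.update(nickname='daisy').apply()

# And the non-ORM way is always available, e.g. a plain scalar query:
count = await db.scalar(db.text('SELECT count(*) FROM users'))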
How to join?¶
GINO invents nothing new for building queries; everything there is inherited from SQLAlchemy. Therefore you just need to know how to write a join in SQLAlchemy. In particular, Ádám made some amazing upgrades in GINO #113 to make joining easier, so that you can use model classes directly as if they were tables when joining:
results = await User.join(Book).select().gino.all()
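If you also want the joined rows loaded into model instances instead of plain rows, you can chain a loader onto the same query - a sketch assuming a Book model whose author column is a foreign key to User (see the loader documentation in the API reference below):
query = Book.join(User).select()
books = await query.gino.load(Book.load(author=User)).all()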
How to connect to database through SSL?¶
It depends on the dialect and database driver. For asyncpg, keyword arguments on asyncpg.connect() are directly available on create_engine() or db.set_bind(). Therefore, enabling SSL is rather easy:
engine = await gino.create_engine(..., ssl=True)
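If the server certificate needs to be verified against a specific CA, asyncpg also accepts a full ssl.SSLContext instead of True; a sketch with placeholder paths and DSN:
import ssl

import gino

ctx = ssl.create_default_context(cafile='/path/to/ca.pem')  # placeholder path
engine = await gino.create_engine('asyncpg://user:password@localhost/gino', ssl=ctx)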
What is aiocontextvars and what does it do?¶
It is a partial backport of the new built-in module contextvars introduced in Python 3.7. In Python 3.5 and 3.6, aiocontextvars patches loop.create_task() to copy the context from the caller, as a workaround to simulate the same behavior. This is also under discussion in the upstream backport project; please read more here: https://github.com/MagicStack/contextvars/issues/2
If you are using Python 3.7, then aiocontextvars does nothing at all.
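For reference, this is the behavior being backported - a context variable set in the calling coroutine is visible inside a task created from it (this sketch runs as-is on Python 3.7+; on 3.5/3.6 it is what aiocontextvars simulates):
import asyncio
import contextvars

request_id = contextvars.ContextVar('request_id', default=None)

async def child():
    # The task runs in a copy of the context taken at create_task() time,
    # so it sees the value set by its caller.
    print(request_id.get())  # prints 42

async def main():
    request_id.set(42)
    await asyncio.get_event_loop().create_task(child())

asyncio.get_event_loop().run_until_complete(main())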
注解
This answer is for GINO 0.8 and later, please check earlier versions of this documentation if you are using GINO 0.7.
API 参考¶
gino package¶
Subpackages¶
gino.dialects package¶
Submodules¶
-
class gino.dialects.asyncpg.AsyncEnum(*enums, **kw)[源代码]¶
基类: sqlalchemy.dialects.postgresql.base.ENUM
Construct an ENUM. Arguments are the same as that of types.Enum, but also including the following parameters.
参数
create_type -- Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the create() and drop() methods can be used to emit SQL to a target bind.
-
class gino.dialects.asyncpg.AsyncpgCompiler(dialect, statement, column_keys=None, inline=False, **kwargs)[源代码]¶
基类: sqlalchemy.dialects.postgresql.base.PGCompiler
Construct a new SQLCompiler object.
参数
dialect -- Dialect to be used
statement -- ClauseElement to be compiled
column_keys -- a list of column names to be compiled into an INSERT or UPDATE statement.
inline -- whether to generate INSERT statements as "inline", e.g. not formatted to return any generated defaults
kwargs -- additional keyword arguments to be consumed by the superclass.
property bindtemplate¶
-
class gino.dialects.asyncpg.AsyncpgDBAPI[源代码]¶
基类: gino.dialects.base.BaseDBAPI
Error = (<class 'asyncpg.exceptions._base.PostgresError'>, <class 'asyncpg.exceptions._base.InterfaceError'>)¶
-
-
class gino.dialects.asyncpg.AsyncpgDialect(*args, **kwargs)[源代码]¶
基类: sqlalchemy.dialects.postgresql.base.PGDialect, gino.dialects.base.AsyncDialectMixin
colspecs = {<class 'sqlalchemy.sql.sqltypes.Interval'>: <class 'sqlalchemy.dialects.postgresql.base.INTERVAL'>, <class 'sqlalchemy.sql.sqltypes.Enum'>: <class 'gino.dialects.asyncpg.AsyncEnum'>, <class 'sqlalchemy.sql.sqltypes.JSON.JSONPathType'>: <class 'sqlalchemy.dialects.postgresql.json.JSONPathType'>, <class 'sqlalchemy.sql.sqltypes.JSON'>: <class 'sqlalchemy.dialects.postgresql.json.JSON'>, <class 'sqlalchemy.sql.sqltypes.ARRAY'>: <class 'sqlalchemy.dialects.postgresql.array.ARRAY'>, <class 'sqlalchemy.dialects.postgresql.base.ENUM'>: <class 'gino.dialects.asyncpg.AsyncEnum'>, <class 'sqlalchemy.sql.sqltypes.NullType'>: <class 'gino.dialects.asyncpg.GinoNullType'>}¶
cursor_cls¶ DBAPICursor 的别名
dbapi_class¶ AsyncpgDBAPI 的别名
driver = 'asyncpg'¶
execution_ctx_cls¶
init_kwargs = {'command_timeout', 'connection_class', 'database', 'host', 'init', 'loop', 'max_cacheable_statement_size', 'max_cached_statement_lifetime', 'max_inactive_connection_lifetime', 'max_queries', 'max_size', 'min_size', 'passfile', 'password', 'port', 'server_settings', 'setup', 'ssl', 'statement_cache_size', 'timeout', 'user'}¶
on_connect()[源代码]¶
return a callable which sets up a newly created DBAPI connection.
This is used to set dialect-wide per-connection options such as isolation modes, unicode modes, etc.
If a callable is returned, it will be assembled into a pool listener that receives the direct DBAPI connection, with all wrappers removed.
If None is returned, no listener will be generated.
statement_compiler¶ AsyncpgCompiler 的别名
supports_native_decimal = True¶
-
class gino.dialects.asyncpg.AsyncpgExecutionContext[源代码]¶
基类: gino.dialects.base.ExecutionContextOverride, sqlalchemy.dialects.postgresql.base.PGExecutionContext
class gino.dialects.asyncpg.DBAPICursor(dbapi_conn)[源代码]¶
基类: gino.dialects.base.DBAPICursor
property description¶
class gino.dialects.asyncpg.GinoNullType[源代码]¶
基类: sqlalchemy.sql.sqltypes.NullType
result_processor(dialect, coltype)[源代码]¶
Return a conversion function for processing result row values.
Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.
If processing is not necessary, the method should return None.
参数
dialect -- Dialect instance in use.
coltype -- DBAPI coltype argument received in cursor.description.
class gino.dialects.asyncpg.Transaction(tx)[源代码]¶
基类: gino.dialects.base.Transaction
property raw_transaction¶
-
class gino.dialects.base.AsyncDialectMixin[源代码]¶
基类: object
cursor_cls¶ DBAPICursor 的别名
class gino.dialects.base.BaseDBAPI[源代码]¶
基类: object
Error¶ builtins.Exception 的别名
paramstyle = 'numeric'¶
class gino.dialects.base.ExecutionContextOverride[源代码]¶
基类: object
loader¶
model¶
return_model¶
timeout¶
Module contents¶
gino.ext package¶
Submodules¶
-
class gino.ext.aiohttp.AiohttpStrategy[源代码]¶
基类: gino.strategies.GinoStrategy
engine_cls¶ GinoEngine 的别名
name = 'aiohttp'¶
class gino.ext.aiohttp.Gino(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[源代码]¶
Support aiohttp.web server.
The common usage looks like this:
from aiohttp import web
from gino.ext.aiohttp import Gino

db = Gino()
app = web.Application(middlewares=[db])
db.init_app(app)
By
init_app()
GINO subscribes to a few signals on aiohttp, so that GINO could use database configuration to initialize the bound engine.The configuration can be passed in the
config
parameter ofinit_app
, or if that is not set, in app['config']['gino'], both of which should be a dictionary.The config includes:
driver
- the database driver, default isasyncpg
.host
- database server host, default islocalhost
.port
- database server port, default is5432
.user
- database server user, default ispostgres
.password
- database server password, default is empty.database
- database name, default ispostgres
.dsn
- a SQLAlchemy database URL to create the engine, its existence will replace all previous connect arguments.pool_min_size
- the initial number of connections of the db pool.pool_max_size
- the maximum number of connections in the db pool.echo
- enable SQLAlchemy echo mode.ssl
- SSL context passed toasyncpg.connect
, default isNone
.kwargs
- other parameters passed to the specified dialects, likeasyncpg
. Unrecognized parameters will cause exceptions.
If the
db
is set as an aiohttp middleware, then a lazy connection is available atrequest['connection']
. By default, a database connection is borrowed on the first query, shared in the same execution context, and returned to the pool on response. If you need to release the connection early in the middle to do some long-running tasks, you can simply do this:await request['connection'].release(permanent=False)
- 参数
bind -- A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
.model_classes -- A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
.query_ext -- Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
).schema_ext -- Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
).ext -- Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default).kwargs -- Other arguments accepted by
MetaData
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.aiohttp.AiohttpModelMixin'>)¶
-
query_executor
¶ GinoExecutor
的别名
-
class gino.ext.aiohttp.GinoEngine(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[源代码]¶
connection_cls¶ GinoConnection 的别名
-
-
class gino.ext.sanic.Gino(app=None, *args, **kwargs)[源代码]¶
Support Sanic web server.
By
init_app()
GINO registers a few hooks on Sanic, so that GINO could use database configuration in Sanicconfig
to initialize the bound engine.A lazy connection context is enabled by default for every request. You can change this default behavior by setting
DB_USE_CONNECTION_FOR_REQUEST
config value toFalse
. By default, a database connection is borrowed on the first query, shared in the same execution context, and returned to the pool on response. If you need to release the connection early in the middle to do some long-running tasks, you can simply do this:await request['connection'].release(permanent=False)
- 参数
bind -- A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
.model_classes -- A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
.query_ext -- Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
).schema_ext -- Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
).ext -- Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default).kwargs -- Other arguments accepted by
MetaData
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.sanic.SanicModelMixin'>)¶
-
query_executor
¶ GinoExecutor
的别名
-
class gino.ext.sanic.GinoEngine(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[源代码]¶
connection_cls¶ GinoConnection 的别名
-
-
class gino.ext.sanic.SanicStrategy[源代码]¶
基类: gino.strategies.GinoStrategy
engine_cls¶ GinoEngine 的别名
name = 'sanic'¶
-
GINO provides a convenient plugin for integrating with Tornado webserver.
GINO can provide a web framework-aware .get_or_404()
method which work
similar to .get()
but raises an appropriate error whenever requested object
not found. In case of tornado, an appropriate error is
tornado.web.HTTPError(404)
.
To have it working, simply use gino.ext.tornado.Gino
as your
database metadata.
GINO provides two ways to initialize the db instances for tornado:
1) Initialize by db.init_app(app)
, and get the Gino
instance in the
RequestHandler
subclasses by self.application.db
.
2) For subclassed tornado Application, use gino.ext.tornado.DBMixin
and run init_db()
.
A mixin to provide a convenience property access to db using self.db
instead of self.application.db
.
A hello world application that uses tornado and GINO may look like this:
import ssl
import tornado.web
import tornado.ioloop
import tornado.options
import tornado.escape
from gino.ext.tornado import Gino, RequestHandlerMixin
# Define your database metadata
# -----------------------------
db = Gino()
# Define tables as you would normally do
# --------------------------------------
class User(db.Model):
__tablename__ = 'users'
id = db.Column(db.Integer(), primary_key=True, autoincrement=True)
nickname = db.Column(db.Unicode(), nullable=False)
# Now just use your tables
# ------------------------
class AllUsers(tornado.web.RequestHandler, RequestHandlerMixin):
async def get(self):
users = await User.query.gino.all()
for user in users:
url = self.application.reverse_url('user', user.id)
nickname = tornado.escape.xhtml_escape(user.nickname)
self.write(f'<a href="{url}">{nickname}</a><br/>')
class GetUser(tornado.web.RequestHandler, RequestHandlerMixin):
async def get(self, uid):
async with self.db.acquire() as conn:
async with conn.transaction():
user: User = await User.get_or_404(int(uid))
self.write(f'Hi, {user.nickname}!')
if __name__ == '__main__':
app = tornado.web.Application([
tornado.web.URLSpec(r'/', AllUsers, name='index'),
tornado.web.URLSpec(r'/users/(?P<uid>[0-9]+)', GetUser,
name='user')
], debug=True)
tornado.ioloop.IOLoop.current().run_sync(
lambda: db.init_app(app, ssl=True))
app.listen(8888)
tornado.ioloop.IOLoop.current().start()
-
class gino.ext.tornado.DBMixin[源代码]¶
基类: object
A mixin for tornado.web.Application to initialize and have convenience methods for database accesses.
db = None¶
-
-
class gino.ext.tornado.Gino(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[源代码]¶
Base class for GINO database.
Using this class as a metadata for your database adds an additional
get_or_404()
method to all of your table classes.- 参数
bind -- A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
.model_classes -- A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
.query_ext -- Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
).schema_ext -- Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
).ext -- Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default).kwargs -- Other arguments accepted by
MetaData
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>, <class 'gino.ext.tornado.TornadoModelMixin'>)¶
-
query_executor
¶ GinoExecutor
的别名
-
class gino.ext.tornado.GinoEngine(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[源代码]¶
connection_cls¶ GinoConnection 的别名
-
-
class gino.ext.tornado.RequestHandlerMixin[源代码]¶
基类: object
A mixin to provide convenience methods to access GINO object
property db¶
-
class gino.ext.tornado.TornadoStrategy[源代码]¶
基类: gino.strategies.GinoStrategy
engine_cls¶ GinoEngine 的别名
name = 'tornado'¶
-
Module contents¶
Submodules¶
gino.api module¶
-
class gino.api.Gino(bind=None, model_classes=None, query_ext=True, schema_ext=True, ext=True, **kwargs)[源代码]¶
基类: sqlalchemy.sql.schema.MetaData
All-in-one API class of GINO, providing several shortcuts.
This class is a subclass of SQLAlchemy
MetaData
, therefore its instances can be used as a normalMetaData
object, e.g. used in Alembic. In usual cases, you would want to define one globalGino
instance, usually under the name ofdb
, representing the database used in your application.You may define tables in the official way SQLAlchemy core recommended, but more often in GINO we define model classes with
db.Model
as their parent class to represent tables, for its objective interface and CRUD operations. Please read CRUD for more information.For convenience,
Gino
instance delegated all properties publicly exposed bysqlalchemy
, so that you can define tables / models without importingsqlalchemy
:id = db.Column(db.BigInteger(), primary_key=True)
Similar to
MetaData
, aGino
object can bind to aGinoEngine
instance, hereby allowing "implicit execution" through thegino
extension onExecutable
orSchemaItem
constructs:await User.query.gino.first() await db.gino.create_all()
Differently, GINO encourages the use of implicit execution and manages transactional context correctly.
Binding to a connection object is not supported.
To set a bind property, you can simply set your
GinoEngine
object ondb.bind
, or set it toNone
to unbind. However, the creation of engine usually happens at the same time. Therefore, GINO provided several convenient ways doing so:with_bind()
returning an asynchronous context manager:async with db.with_bind('postgresql://...') as engine:
set_bind()
andpop_bind()
:engine = await db.set_bind('postgresql://...') await db.pop_bind().close()
Directly
await
onGino
instance:db = await gino.Gino('postgresql://...') await db.pop_bind().close()
注解
SQLAlchemy allows creating the engine by:
metadata.bind = 'postgresql://...'
While in GINO this only sets a string to
bind
, because creating an engine requiresawait
, which is exactly whatset_bind()
does.At last,
Gino
delegates all query APIs on the boundGinoEngine
.- 参数
bind -- A
GinoEngine
instance to bind. Also accepts string orURL
, which will be passed tocreate_engine()
when thisGino
instance is awaited. Default isNone
.model_classes -- A
tuple
of base class and mixin classes to create theModel
class. Default is(CRUDModel, )
.query_ext -- Boolean value to control the installation of the
gino
extension onExecutable
for implicit execution. Default is to install (True
).schema_ext -- Boolean value to control the installation of the
gino
extension onSchemaItem
for implicit execution. Default is to install (True
).ext -- Boolean value to control the installation of the two
gino
extensions.False
for no extension at all, while it depends on the two individual switches when this is set toTrue
(default).kwargs -- Other arguments accepted by
MetaData
.
-
property
Model
¶ Declarative base class for models, subclass of
gino.declarative.Model
. Defining subclasses of this class will result new tables added to thisGino
metadata.
-
acquire
(*args, **kwargs)[源代码]¶ A delegate of
GinoEngine.acquire()
.
-
property
bind
¶ An
GinoEngine
to which thisGino
is bound.This is a simple property with no getter or setter hook - what you set is what you get. To achieve the same result as it is in SQLAlchemy - setting a string or
URL
and getting an engine instance, useset_bind()
(orawait
on thisGino
object after setting a string orURL
).
-
compile
(elem, *multiparams, **params)[源代码]¶ A delegate of
GinoEngine.compile()
.
-
iterate
(clause, *multiparams, **params)[源代码]¶ A delegate of
GinoEngine.iterate()
.
-
model_base_classes
= (<class 'gino.crud.CRUDModel'>,)¶ Overridable default model classes to build the
Model
.Default is
(CRUDModel, )
.
-
no_delegate
= {'create_engine', 'engine_from_config'}¶ A set of symbols from
sqlalchemy
which is not delegated byGino
.
-
pop_bind
()[源代码]¶ Unbind self, and return the bound engine.
This is usually used in a chained call to close the engine:
await db.pop_bind().close()
- 返回
GinoEngine
orNone
if self is not bound.
-
query_executor
¶ The overridable
gino
extension class onExecutable
.This class will be set as the getter method of the property
gino
onExecutable
and its subclasses, ifext
andquery_ext
arguments are bothTrue
. Default isGinoExecutor
.GinoExecutor
的别名
-
schema_visitor
¶
-
transaction
(*args, **kwargs)[源代码]¶ A delegate of
GinoEngine.transaction()
.
-
with_bind
(bind, loop=None, **kwargs)[源代码]¶ Shortcut for
set_bind()
andpop_bind()
plus closing engine.This method accepts the same arguments of
create_engine()
. This allows inline creating an engine and binding self on enter, and unbinding self and closing the engine on exit:async with db.with_bind('postgresql://...') as engine: # play with engine
- 返回
An asynchronous context manager.
-
class gino.api.GinoExecutor(query)[源代码]¶
基类: object
The default
gino
extension onExecutable
constructs for implicit execution.Instances of this class are created when visiting the
gino
property ofExecutable
instances (also referred as queries or clause elements), for example:await User.query.gino.first()
This allows GINO to add the asynchronous query APIs (
all()
,first()
,one()
,one_or_none()
,scalar()
,status()
,iterate()
) to SQLAlchemy query clauses without messing up with existing synchronous ones. Calling these asynchronous query APIs has the same restriction - the relevant metadata (theGino
instance) must be bound to an engine, or anAttributeError
will be raised.注解
Executable clause elements that are completely irrelevant with any table - for example
db.select([db.text('now()')])
- has no metadata, hence no engine. Therefore, this will always fail:await db.select([db.text('now()')]).gino.scalar()
You should use
conn.scalar()
,engine.scalar()
or evendb.scalar()
in this case.-
iterate
(*multiparams, **params)[源代码]¶ Returns
engine.iterate()
with this query as the first argument, and other arguments followed, whereengine
is theGinoEngine
to which the metadata (Gino
) is bound, while metadata is found in this query.
-
load
(value)[源代码]¶ Shortcut to set execution option
loader
in a chaining call.For example to load
Book
instances with their authors:query = Book.join(User).select() books = await query.gino.load(Book.load(author=User)).all()
Read
execution_options()
for more information.
-
model
(model)[源代码]¶ Shortcut to set execution option
model
in a chaining call.Read
execution_options()
for more information.
-
property
query
¶ Get back the chained
Executable
.In a chained query calls, occasionally the previous query clause is needed after a
.gino.
chain, you can use.query.
to resume the chain back. For example:await User.query.gino.model(FOUser).query.where(...).gino.all()
-
return_model
(switch)[源代码]¶ Shortcut to set execution option
return_model
in a chaining call.Read
execution_options()
for more information.
-
timeout
(timeout)[源代码]¶ Shortcut to set execution option
timeout
in a chaining call.Read
execution_options()
for more information.
-
gino.crud module¶
-
class gino.crud.Alias(model, *args, **kwargs)[源代码]¶
基类: object
Experimental proxy for table alias on model.
class gino.crud.CRUDModel(**values)[源代码]¶
The base class for models with CRUD support.
Don't inherit from this class directly, because it has no metadata. Use db.Model instead.
append_where_primary_key
(q)[源代码]¶ Append where clause to locate this model instance by primary on the given query, and return the new query.
This is mostly used internally in GINO, but also available for such usage:
await user.append_where_primary_key(User.query).gino.first()
which is identical to:
await user.query.gino.first()
0.7.6 版后已移除: Use
lookup()
instead.
-
coroutine
create
(bind=None, timeout=<object object>, **values)¶ This
create
behaves a bit different on model classes compared to model instances.On model classes,
create
will create a new model instance and insert it into database. On model instances,create
will just insert the instance into the database.Under the hood
create()
usesINSERT ... RETURNING ...
to create the new model instance and load it with database default data if not specified.Some examples:
user1 = await User.create(name='fantix', age=32) user2 = User(name='gino', age=42) await user2.create()
- 参数
bind -- A
GinoEngine
to execute theINSERT
statement with, orNone
(default) to use the bound engine on the metadata (Gino
).timeout -- Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.values -- Keyword arguments are pairs of attribute names and their initial values. Only available when called on a model class.
- 返回
The instance of this model class (newly created or existing).
-
delete
¶ Similar to
update()
, thisdelete
is also different on model classes than on model instances.On model classes
delete
is an attribute of typeDelete
for massive deletes, for example:await User.delete.where(User.enabled.is_(False)).gino.status()
Similarly you can add a
returning()
clause to the query and it shall return the deleted rows as model objects.And on model instances,
delete()
is a method to remove the corresponding row in the database of this model instance. and returns the status returned from the database:print(await user.delete()) # e.g. prints DELETE 1
注解
delete()
only removes the row from database, it does not affect the current model instance.- 参数
bind -- An optional
GinoEngine
if current metadata (Gino
) has no bound engine, or specifying a differentGinoEngine
to execute theDELETE
.timeout -- Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.
-
classmethod
distinct
(*columns)[源代码]¶ Experimental loader feature to yield only distinct instances by given columns.
-
classmethod coroutine
get
(ident, bind=None, timeout=<object object>)[源代码]¶ Get an instance of this model class by primary key.
For example:
user = await User.get(request.args.get('user_id'))
- 参数
ident -- Value of the primary key. For composite primary keys this should be a tuple of values for all keys in database order, or a dict of names (or position numbers in database order starting from zero) of all primary keys to their values.
bind -- A
GinoEngine
to execute theINSERT
statement with, orNone
(default) to use the bound engine on the metadata (Gino
).timeout -- Seconds to wait for the database to finish executing,
None
for wait forever. By default it will use thetimeout
execution option value if unspecified.
- 返回
An instance of this model class, or
None
if no such row.
-
classmethod
in_query
(query)[源代码]¶ Convenient method to get a Model object when using subqueries.
Though with filters and aggregations, subqueries often return same columns as the original table, but SQLAlchemy could not recognize them as the columns are in subqueries, so technically they're columns in the new "table".
With this method, the columns are loaded into the origintal models when being used in subquries. For example:
query = query.alias('users') MyUser = User.in_query(query) loader = MyUser.distinct(User1.id).load() users = await query.gino.load(loader).all()
-
classmethod
load
(*column_names, **relationships)[源代码]¶ Populates a
loader.Loader
instance to be used by theloader
execution option in order to customize the loading behavior to load specified fields into instances of this model.The basic usage of this method is to provide the
loader
execution option (if you are looking for reloading the instance from database, checkget()
orquery
) for a given query.This method takes both positional arguments and keyword arguments with very different meanings. The positional arguments should be column names as strings, specifying only these columns should be loaded into the model instance (other values are discarded even if they are retrieved from database). Meanwhile, the keyword arguments should be loaders for instance attributes. For example:
u = await User.query.gino.load(User.load('id', 'name')).first()
小技巧
gino.load
is a shortcut for setting the execution optionloader
.This will populate a
User
instance with onlyid
andname
values, all the rest are simplyNone
even if the query actually returned all the column values.q = User.join(Team).select() u = await q.gino.load(User.load(team=Team)).first()
This will load two instances of model
User
andTeam
, returning theUser
instance withu.team
set to theTeam
instance.Both positional and keyword arguments can be used ath the same time. If they are both omitted, like
Team.load()
, it is equivalent to justTeam
as a loader.Additionally, a
loader.Loader
instance can also be used to generate queries, as its structure is usually the same as the query:u = await User.load(team=Team).query.gino.first()
This generates a query like this:
SELECT users.xxx, ..., teams.xxx, ... FROM users LEFT JOIN teams ON ...
The
Loader
delegates attributes on thequery
, so.query
can be omitted. TheLEFT JOIN
is built-in behavior, while theON
clause is generated based on foreign key. If there is no foreign key, or the condition should be customized, you can use this:u = await User.load( team=Team.on(User.team_id == Team.id)).gino.first()
And you can use both
load()
andon()
at the same time in a chain, in whatever order suits you.
-
lookup
()[源代码]¶ Generate where-clause expression to locate this model instance.
By default this method uses current values of all primary keys, and you can override it to behave differently. Most instance-level CRUD operations depend on this method internally. Particularly while
lookup()
is called inupdate()
, the where condition is used inUpdateRequest.apply()
, so that queries likeUPDATE ... SET id = NEW WHERE id = OLD
could work correctly.- 返回
0.7.6 新版功能.
-
classmethod
on
(on_clause)[源代码]¶ Customize the on-clause for the auto-generated outer join query.
注解
This has no effect when provided as the
loader
execution option for a given query.参见
-
query
¶ Get a SQLAlchemy query clause of the table behind this model. This equals to
sqlalchemy.select([self.__table__])
. If this attribute is retrieved on a model instance, then a where clause to locate this instance by its primary key is appended to the returning query clause. This model type is set as the execution optionmodel
in the returning clause, so by default the query yields instances of this model instead of database rows.
-
select
()¶ Build a query to retrieve only specified columns from this table.
This method accepts positional string arguments as names of attributes to retrieve, and returns a
Select
for query. The returning query object is always set with two execution options:model
is set to this model typereturn_model
is set toFalse
So that by default it always return rows instead of model instances, while column types can be inferred correctly by the
model
option.For example:
async for row in User.select('id', 'name').gino.iterate(): print(row['id'], row['name'])
If
select()
is invoked on a model instance, then aWHERE
clause to locate this instance by its primary key is appended to the returning query clause. This is useful when you want to retrieve a latest value of a field on current model instance from database:db_age = await user.select('age').gino.scalar()
-
to_dict
()[源代码]¶ Convenient method to generate a dict from this model instance.
Keys will be attribute names, while values are loaded from memory (not from database). If there are
JSONProperty
attributes in this model, their source JSON field will not be included in the returning dict - instead the JSON attributes will be.参见
-
update
¶ This
update
behaves quite different on model classes rather than model instances.On model classes,
update
is an attribute of typeUpdate
for massive updates, for example:await User.update.values(enabled=True).where(...).gino.status()
Like
query
, the update query also has themodel
execution option of this model, so if you use thereturning()
clause, the query shall return model objects.However on model instances,
update()
is a method which accepts keyword arguments only and returns anUpdateRequest
to update this single model instance. The keyword arguments are pairs of attribute names and new values. This is the same asUpdateRequest.update()
, feel free to read more about it. A normal usage example would be like this:await user.update(name='new name', age=32).apply()
Here, the
await ... apply()
executes the actualUPDATE
SQL in the database, whileuser.update()
only makes changes in the memory, and collect all changes into anUpdateRequest
instance.
-
-
class gino.crud.UpdateRequest(instance: gino.crud.CRUDModel)[源代码]¶
基类: object
A collection of attributes and their new values to update on one model instance.
UpdateRequest instances are created by CRUDModel.update, don't instantiate manually unless required. Every UpdateRequest instance is bound to one model instance, all updates are for that one specific model instance and its database row.
update
(**values)[源代码]¶ Set given attributes on the bound model instance, and add them into the update collections for
apply()
.Given keyword-only arguments are pairs of attribute names and values to update. This is not a coroutine, calling
update()
will have instant effect on the bound model instance - its in-memory values will be updated immediately. Therefore this can be used individually as a shortcut to update several attributes in a batch:user.update(age=32, disabled=True)
update()
returnsself
for chaining calls to eitherapply()
or anotherupdate()
. If one attribute is updated several times by the sameUpdateRequest
, then only the last value is remembered forapply()
.Updated values can be SQLAlchemy expressions, for example an atomic increment for user balance looks like this:
await user.update(balance=User.balance + 100).apply()
注解
Expression values will not affect the in-memory attribute value on
update()
beforeapply()
, because it has no knowledge of the latest value in the database. Afterapply()
the new value will be automatically reloaded from database withRETURNING
clause.
-
gino.declarative module¶
-
gino.declarative.declarative_base(metadata, model_classes=(<class 'gino.declarative.Model'>, ), name='Model')[源代码]¶
gino.declarative.declared_attr(m)[源代码]¶
Mark a class-level method as a factory of attribute.
This is intended to be used as decorators on class-level methods of a
Model
class. When initializing the class as well as its subclasses, the decorated factory method will be called for each class, the returned result will be set on the class in place of the factory method under the same name.@declared_attr
is implemented differently thandeclared_attr
of SQLAlchemy, but they are both more often used on mixins to dynamically declare indices or constraints (also works for column and__table_args__
, or even normal class attributes):class TrackedMixin: created = db.Column(db.DateTime(timezone=True)) @db.declared_attr def unique_id(cls): return db.Column(db.Integer()) @db.declared_attr def unique_constraint(cls): return db.UniqueConstraint('unique_id') @db.declared_attr def poly(cls): if cls.__name__ == 'Thing': return db.Column(db.Unicode()) @db.declared_attr def __table_args__(cls): if cls.__name__ == 'Thing': return db.UniqueConstraint('poly'),
注解
This doesn't work if the model already had a
__table__
.
gino.engine module¶
-
class gino.engine.GinoConnection(dialect, sa_conn, stack=None)[源代码]¶
基类: object
Represents an actual database connection.
This is the root of all query API like
all()
,first()
,one()
,one_or_none()
,scalar()
orstatus()
, those on engine or query are simply wrappers of methods in this class.Usually instances of this class are created by
GinoEngine.acquire()
.注解
GinoConnection
may refer to zero or one underlying database connection - when aGinoConnection
is acquired withlazy=True
, the underlying connection may still be in the pool, until a query API is called orget_raw_connection()
is called.Oppositely, one underlying database connection can be shared by many
GinoConnection
instances when they are acquired withreuse=True
. The actual database connection is only returned to the pool when the rootGinoConnection
is released. Read more inGinoEngine.acquire()
method.-
execution_options
(**opt)[源代码]¶ Set non-SQL options for the connection which take effect during execution.
This method returns a copy of this
GinoConnection
which references the same underlying database connection, but with the given execution options set on the copy. Therefore, it is a good practice to discard the copy immediately after use, for example:row = await conn.execution_options(model=None).first(User.query)
This is very much the same as SQLAlchemy
execution_options()
, it actually does pass the execution options to the underlying SQLAlchemyConnection
. Furthermore, GINO added a few execution options:- 参数
return_model -- Boolean to control whether the returning results should be loaded into model instances, where the model class is defined in another execution option
model
. Default isTrue
.model -- Specifies the type of model instance to create on return. This has no effect if
return_model
is set toFalse
. Usually in queries built by CRUD models, this execution option is automatically set. For now, GINO only supports loading each row into one type of model object, relationships are not supported. Please use multiple queries for that.None
for no postprocessing (default).timeout -- Seconds to wait for the query to finish.
None
for no time out (default).loader --
A loader expression to load the database rows into specified objective structure. It can be either:
A model class, so that the query will yield model instances of this class. It is your responsibility to make sure all the columns of this model is selected in the query.
A
Column
instance, so that each result will be only a single value of this column. Please note, if you want to achieve fetching the very first value, you should usefirst()
instead ofscalar()
. However, using directlyscalar()
is a more direct way.A tuple nesting more loader expressions recursively.
A
callable()
function that will be called for each row to fully customize the result. Two positional arguments will be passed to the function: the first is therow
instance, the second is a context object which is only present if nested elseNone
.A
Loader
instance directly.Anything else will be treated as literal values thus returned as whatever they are.
-
iterate
(clause, *multiparams, **params)[源代码]¶ Creates a server-side cursor in database for large query results.
Cursors must work within transactions:
async with conn.transaction(): async for user in conn.iterate(User.query): # handle each user without loading all users into memory
Alternatively, you can manually control how the cursor works:
async with conn.transaction(): cursor = await conn.iterate(User.query) user = await cursor.next() users = await cursor.many(10)
Read more about how
Cursor
works.Similarly, this method takes the same parameters as
all()
.
-
property
raw_connection
¶ The current underlying database connection instance, type depends on the dialect in use. May be
None
if self is a lazy connection.
-
schema_for_object
= <sqlalchemy.sql.schema._SchemaTranslateMap object>¶ A SQLAlchemy compatibility attribute, don't use it for now, it bites.
-
transaction
(*args, **kwargs)[源代码]¶ Starts a database transaction.
There are two ways using this method: managed as an asynchronous context manager:
async with conn.transaction() as tx: # run query in transaction
or manually awaited:
tx = await conn.transaction() try: # run query in transaction await tx.commit() except Exception: await tx.rollback() raise
Where the
tx
is an instance of theGinoTransaction
class, feel free to read more about it.In the first managed mode, the transaction is automatically committed on exiting the context block, or rolled back if an exception was raised which led to the exit of the context. In the second manual mode, you'll need to manually call the
commit()
orrollback()
methods on need.If this is a lazy connection, entering a transaction will cause a new database connection acquired if none was present.
Transactions may support nesting depending on the dialect in use. For example in asyncpg, starting a second transaction on the same connection will create a save point in the database.
For now, the parameters are directly passed to underlying database driver, read
asyncpg.connection.Connection.transaction()
for asyncpg.
-
-
class gino.engine.GinoEngine(dialect, pool, loop, logging_name=None, echo=None, execution_options=None)[源代码]¶
基类: object
Connects a
Pool
andDialect
together to provide a source of database connectivity and behavior.A
GinoEngine
object is instantiated publicly using thegino.create_engine()
function ordb.set_bind()
method.-
acquire
(*, timeout=None, reuse=False, lazy=False, reusable=True)[源代码]¶ Acquire a connection from the pool.
There are two ways using this method - as an asynchronous context manager:
async with engine.acquire() as conn: # play with the connection
which will guarantee the connection is returned to the pool when leaving the
async with
block; or as a coroutine:conn = await engine.acquire() try: # play with the connection finally: await conn.release()
where the connection should be manually returned to the pool with
conn.release()
.Within the same context (usually the same
Task
, see also Transaction), a nesting acquire by default re- 参数
timeout -- Block up to
timeout
seconds until there is one free connection in the pool. Default isNone
- block forever until succeeded. This has no effect whenlazy=True
, and depends on the actual situation whenreuse=True
.reuse -- Reuse the latest reusable acquired connection (before it's returned to the pool) in current context if there is one, or borrow a new one if none present. Default is
False
for always borrow a new one. This is useful when you are in a nested method call series, wishing to use the same connection without passing it around as parameters. See also: Transaction. A reusing connection is not reusable even ifreusable=True
. If the reused connection happened to be a lazy one, then the reusing connection is lazy too.lazy -- Don't acquire the actual underlying connection yet - do it only when needed. Default is
False
for always do it immediately. This is useful before entering a code block which may or may not make use of a given connection object. Feeding in a lazy connection will save the borrow-return job if the connection is never used. If settingreuse=True
at the same time, then the reused connection - if any - applies the same laziness. For example, reusing a lazy connection withlazy=False
will cause the reused connection to acquire an underlying connection immediately.reusable -- Mark this connection as reusable or otherwise. This has no effect if it is a reusing connection. All reusable connections are placed in a stack, any reusing acquire operation will always reuse the top (latest) reusable connection. One reusable connection may be reused by several reusing connections - they all share one same underlying connection. Acquiring a connection with
reusable=False
andreusing=False
makes it a cleanly isolated connection which is only referenced once here.
- 返回
A
GinoConnection
object.
-
compile
(clause, *multiparams, **params)[源代码]¶ A shortcut for
compile()
on the dialect, returns raw SQL string and parameters according to the rules of the dialect.
-
connection_cls
¶ Customizes the connection class to use, default is
GinoConnection
.GinoConnection
的别名
-
property
current_connection
¶ Gets the most recently acquired reusable connection in the context.
None
if there is no such connection.
-
iterate
(clause, *multiparams, **params)[源代码]¶ Creates a server-side cursor in database for large query results.
This requires that there is a reusable connection in the current context, and an active transaction is present. Then its
GinoConnection.iterate()
is executed and returned.
-
property
raw_pool
¶ Read-only access to the underlying database connection pool instance. This depends on the actual dialect in use,
Pool
of asyncpg for example.
-
transaction
(*args, timeout=None, reuse=True, reusable=True, **kwargs)[源代码]¶ Borrows a new connection and starts a transaction with it.
Different to
GinoConnection.transaction()
, transaction on engine level supports only managed usage:async with engine.transaction() as tx: # play with transaction here
Where the implicitly acquired connection is available as
tx.connection
.By default,
transaction()
acquires connection withreuse=True
andreusable=True
, that means it by default tries to create a nested transaction instead of a new transaction on a new connection. You can change the default behavior by setting these two arguments.The other arguments are the same as
transaction()
on connection.- 返回
A asynchronous context manager that yields a
GinoTransaction
-
update_execution_options
(**opt)[源代码]¶ Update the default execution_options dictionary of this
GinoEngine
.
-
gino.exceptions module¶
gino.json_support module¶
gino.loader module¶
gino.schema module¶
- class gino.schema.AsyncSchemaDropper(dialect, connection, checkfirst=False, tables=None, **kwargs)[source]¶
Bases: gino.schema.AsyncVisitor, sqlalchemy.sql.ddl.SchemaDropper
- class gino.schema.AsyncSchemaGenerator(dialect, connection, checkfirst=False, tables=None, **kwargs)[source]¶
Bases: gino.schema.AsyncVisitor, sqlalchemy.sql.ddl.SchemaGenerator
gino.strategies module¶
gino.transaction module¶
- class gino.transaction.GinoTransaction(conn, args, kwargs)[source]¶
Bases: object
Represents an underlying database transaction and its connection, offering methods to manage this transaction.
GinoTransaction is supposed to be created by either gino.engine.GinoConnection.transaction(), gino.engine.GinoEngine.transaction() or gino.api.Gino.transaction(), shown as follows:

async with db.transaction() as tx:
    ...

async with engine.transaction() as tx:
    ...

async with conn.transaction() as tx:
    ...

tx = await conn.transaction()
try:
    ...
    await tx.commit()
except Exception:
    await tx.rollback()
    raise
When used with the asynchronous context manager, GinoTransaction is in managed mode, while the last example with await puts the GinoTransaction in manual mode, where you have to call commit() or rollback() to close the transaction yourself.
In managed mode the transaction is automatically committed or rolled back on exiting the async with block, depending on whether an exception was raised. You can also explicitly exit the transaction early with raise_commit() or raise_rollback(), which raise an internal exception that is caught by the asynchronous context manager and interpreted as a commit or rollback action. In a nested transaction situation, these two exit-early methods always close up the very transaction on which they are called: all child transactions are committed or rolled back correspondingly, while no parent transaction is ever touched. For example:

async with db.transaction() as tx1:
    async with db.transaction() as tx2:
        async with db.transaction() as tx3:
            tx2.raise_rollback()
            # Won't reach here
        # Won't reach here
    # Continues here with tx1, with both tx2 and tx3 rolled back.
    # For PostgreSQL, tx1 can still be committed successfully because
    # tx2 and tx3 are just SAVEPOINTs in transaction tx1
Tip
The internal exception raised by raise_commit() and raise_rollback() is a subclass of BaseException, so a normal try ... except Exception: cannot trap the commit or rollback.
- property connection¶
Accesses the GinoConnection of this transaction. This is useful when the transaction is started from db or engine, where the connection is implicitly acquired for you together with the transaction.
- raise_commit()[source]¶
Only available in managed mode: skips the rest of the code in this transaction and commits immediately by raising an internal exception, which is caught and handled by the asynchronous context manager:

async with db.transaction() as tx:
    await user.update(age=64).apply()
    tx.raise_commit()
    await user.update(age=32).apply()  # won't reach here

assert user.age == 64  # no exception raised before
- raise_rollback()[source]¶
Only available in managed mode: skips the rest of the code in this transaction and rolls back immediately by raising an internal exception, which is caught and handled by the asynchronous context manager:

assert user.age == 64  # assumption

async with db.transaction() as tx:
    await user.update(age=32).apply()
    tx.raise_rollback()
    await user.update(age=128).apply()  # won't reach here

assert user.age == 64  # no exception raised before
- property raw_transaction¶
Accesses the underlying transaction object, whose type depends on the dialect in use.
Contributing¶
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
You can contribute in many ways:
Types of Contributions¶
Report Bugs¶
Report bugs at https://github.com/fantix/gino/issues.
If you are reporting a bug, please include:
Your operating system name and version.
Any details about your local setup that might be helpful in troubleshooting.
Detailed steps to reproduce the bug.
Fix Bugs¶
Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" that is open is available for anyone to fix.
Implement Features¶
Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" that is open is available for anyone to implement.
Write Documentation¶
GINO could always use more documentation, whether as part of the official GINO docs, in docstrings, or even on the web in blog posts, articles, and such.
Submit Feedback¶
The best way to send feedback is to file an issue at https://github.com/fantix/gino/issues.
If you are proposing a feature:
Explain in detail how it would work.
Keep the scope as narrow as possible, to make it easier to implement.
Remember that this is a volunteer-driven project, and that contributions are welcome :)
Get Started!¶
Ready to contribute? Here's how to set up gino for local development.
Fork the gino repo on GitHub.
Clone your fork locally:
$ git clone git@github.com:your_name_here/gino.git
Create a branch for local development:
$ cd gino/
$ git checkout -b name-of-your-bugfix-or-feature
Now you can make your changes locally.
Create a virtual environment. Example for virtualenvwrapper:
$ mkvirtualenv gino
Activate the environment and install requirements:
$ pip install -r requirements_dev.txt
When you're done making changes, check that your changes pass syntax checks:
$ flake8 gino tests
And run the tests (including other Python versions with tox). For the tests you will need a running database server (see the "Tips" section below for configuration details):
$ pytest tests
$ tox
For docs run:
$ make docs
It will build and open up docs in your browser.
Commit your changes and push your branch to GitHub:
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
Submit a pull request through the GitHub website.
Pull Request Guidelines¶
Before you submit a pull request, check that it meets these guidelines:
The pull request should include tests.
If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.
The pull request should work for Python 3.6. Check https://travis-ci.org/fantix/gino/pull_requests and make sure that the tests pass for all supported Python versions.
Tips¶
To run a subset of tests:
$ py.test -svx tests.test_gino
By default the tests run against a locally installed default PostgreSQL database. If you wish to run the tests against a separate database, you can do so by first creating a new database and user using psql or similar:
CREATE ROLE gino WITH LOGIN ENCRYPTED PASSWORD 'gino';
CREATE DATABASE gino WITH OWNER = gino;
Then run the tests like so:
$ export DB_USER=gino DB_PASS=gino DB_NAME=gino
$ py.test
Here is an example of running the database server in Docker. Some tests require SSL, so you will need to run PostgreSQL with SSL enabled. Terminal 1 (server):
$ openssl req -new -text -passout pass:abcd -subj /CN=localhost -out server.req -keyout privkey.pem
$ openssl rsa -in privkey.pem -passin pass:abcd -out server.key
$ openssl req -x509 -in server.req -text -key server.key -out server.crt
$ chmod 600 server.key
$ docker run --name gino_db --rm -it -p 5433:5432 -v "$(pwd)/server.crt:/var/lib/postgresql/server.crt:ro" -v "$(pwd)/server.key:/var/lib/postgresql/server.key:ro" postgres:12-alpine -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key
Terminal 2 (client):
$ export DB_USER=gino DB_PASS=gino DB_NAME=gino DB_PORT=5433
$ docker exec gino_db psql -U postgres -c "CREATE ROLE $DB_USER WITH LOGIN ENCRYPTED PASSWORD '$DB_PASS'"
$ docker exec gino_db psql -U postgres -c "CREATE DATABASE $DB_NAME WITH OWNER = $DB_USER;"
$ pytest tests/test_aiohttp.py
Credits¶
Development Lead¶
Maintainers¶
王沐雨 <wwwjfy@gmail.com>
Contributors¶
王林基 <qdzzyb2015@gmail.com>
李冰含 <lbhsot@163.com>
Vladimir Goncharov <dev.zelta@gmail.com>
Kinware <oss@kinware.com>
Kentoseth <kentoseth@devcroo.com>
Ádám Barancsuk <adam.barancsuk@gmail.com>
Sergey Kovalev <s_kovalev@wargaming.net>
jonahfang
Yurii Shtrikker <nbb.unitto@gmail.com>
Nicolas Crocfer <ncrocfer@gmail.com>
Denys Badzo <badzyo360@gmail.com>
Pavol Vargovcik <pavol.vargovcik@gmail.com>
Mykyta Holubakha <hilobakho@gmail.com>
Jekel <jekel@psyspace.ru>
Martin Zaťko <martin.zatko@kiwi.com>
Pascal van Kooten <kootenpv@gmail.com>
Michał Dziewulski <michal@dziewulski.pl>
Simeon J Morgan <smorgan@digitalfeed.net>
Julio Lacerda <julio.castro.lacerda@gmail.com>
qulaz <qulazwork@gmail.com>
Jim O'Brien <jim@greenfield.fm>
Ilaï Deutel <ilai-deutel@users.noreply.github.com>
Roald Storm <roaldstorm@gmail.com>
Special thanks to my wife Daisy and her outsourcing company DecentFoX Studio, for offering me the opportunity to build this project. We are open for global software project outsourcing on Python, iOS and Android development.
History¶
GINO 0.8¶
This is also version 1.0 release candidate.
Migrating to GINO 0.8¶
1. contextvars¶
We introduced aiocontextvars 0.2.0, which was revamped to be compatible with PEP-567 through a few simple implicit patches, without manual interference. To upgrade to GINO 0.8, please remove the enable_inherit() and disable_inherit() calls: inheritance is now the default behavior and these functions no longer exist. However, you will need to make sure that the event loop in use is always created after importing gino or aiocontextvars, or the patch won't work correctly.
There is nothing to worry about in Python 3.7.
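A minimal sketch of the required import order on Python 3.5/3.6 (the implicit patch is applied at import time, so the loop must be created afterwards):

import gino      # importing gino (or aiocontextvars) first applies the implicit patch
import asyncio

loop = asyncio.new_event_loop()   # the event loop is created after the import
asyncio.set_event_loop(loop)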
2. none_as_none¶
When GINO tries to load a row with all NULL
values into an instance, it
will now by default return None
instead of an instance with all None
attributes. To recover the default behavior of 0.7, please specify
none_as_none(False)
in the affected model loader.
This is especially applicable to relationship sub-loaders - if the sub-loader
found it all NULL
, no instance will be set to parent instance. For
example:
child = await Child.load(parent=Parent).query.gino.first()
If child.parent_id
is NULL
in database, then the child
instance
won't be called with any setattr(child, 'parent', ...)
at all. (If you need
child.parent == None
in this case, consider setting default value
parent = None
in the child model.)
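A rough sketch of what that might look like, assuming the 0.7 behavior is wanted back only for the parent sub-loader (the exact loader construction may differ in your code):

parent_loader = Parent.load()        # sub-loader for the parent relationship
parent_loader.none_as_none(False)    # restore the 0.7 behavior for this sub-loader
child = await Child.load(parent=parent_loader).query.gino.first()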
Please note, it is deprecated to disable none_as_none
, and disabling will
be removed in GINO 1.0.
0.8.5 (2019-11-19)¶
Improved support for __tablename__ in declared_attr (Contributed by Roald Storm in #592)
0.8.4 (2019-11-09)¶
Better loader support for models in subqueries (#573 #585)
Allowed __tablename__ to be a declared_attr (#579 #582)
Fixed Sanic 19.9.0 compatibility (#569)
Added one() and one_or_none() (Contributed by Ilaï Deutel in #577)
Improved Starlette extension compatibility (Contributed by Jim O'Brien in #538)
Fixed Starlette connection release during exceptions issue (Contributed by qulaz in #533)
Fixed server event compatibility with Sanic 19.6.2 (Contributed by Julio Lacerda in #520)
Fixed Grammar (Contributed by Simeon J Morgan in #504)
0.8.3 (2019-06-06)¶
Fixed deprecated warnings in asyncpg dialect and aiohttp (#425)
Customizable db attribute name in aiohttp app instance (#457)
Added Starlette support (#486)
0.8.2 (2019-03-07)¶
Added exception for unknown JSON properties (#406 #408)
Supported Quart 0.7 (#411)
Accepted kwargs for db init in extensions (#407 #427)
Added custom config parameter to aiohttp middleware (Contributed by Michał Dziewulski in #440)
Added NullPool (#437 #441)
Unpinned dependency versions (#447)
Added support for SQLAlchemy 1.3 (#433 #451)
0.8.1 (2018-12-08)¶
Alias supported Label (#365)
Docs update (#308, 4c59ad, #401 by Pascal van Kooten)
Version requirement for SQLAlchemy is updated to >=1.2 (#378 #382)
Added option for SSL in aiohttp extension (Contributed by Martin Zaťko in #387 #393)
And all other extensions (#395)
Supported Tornado 5 (#396, also thanks to Vladimir Goncharov)
Fixed custom JSON/JSONB type support (#402 #403)
(Most fixes done by Tony Wang)
0.8.0 (2018-10-16)¶
Welcome Tony Wang to the maintenance team (#335)
Allowed custom column names (#261 #297)
Allowed column instance in model.load() (Contributed by Jekel in #323)
[Breaking] Upgraded to aiocontextvars 0.2.0 (#333)
Fixed bug that the same empty stack is shared between sub-tasks (#313 #334)
[Breaking] Made none_as_none() the default behavior (#351)
Bug fixes and docs update
GINO 0.7¶
This is also version 1.0 beta 3.
0.7.7 (2018-12-08)¶
Backported fix for custom JSON/JSONB type support (#402 #403)
0.7.6 (2018-08-26)¶
Updated library support (Contributed by Tony Wang in #275 #309)
Added none_as_none() (#281 #282)
Added ARRAY alias in asyncpg dialect module (Contributed by Mykyta Holubakha in #289)
Added Model.lookup() to prevent updating whole table without primary key (#287 #288)
Added DB_ECHO in extension options (Contributed by Mykyta Holubakha in #298)
Fixed broken JSON/JSONB result processor since 0.5.8 (Contributed by Tony Wang in #291 #305)
Removed bad rollback after a failing commit (Contributed by Tony Wang in #302 #304)
Fixed to raise UninitializedError if bind is None (Contributed by Tony Wang in #307 #310)
0.7.5 (2018-07-26)¶
Added friendly error message when using abstract models by mistake (#224)
Supported Python 3.7 (Contributed by Tony Wang in #265)
Updated documentation
Fixed a bug in TupleLoader (Contributed by Pavol Vargovcik in #279 #280)
0.7.4 (2018-06-10)¶
Added aiocontextvars as a required dependency for Python 3.5 and 3.6 (#228)
Added Quart support (#213)
Fixed Tornado options parsing (#231)
Improved coding style and test coverage
0.7.3 (2018-05-19)¶
Fix for failing binary type (#225)
0.7.2 (2018-05-15)¶
Added prepared statement support (#14)
Added dsn in extension config (Contributed by Yurii Shtrikker in #215)
0.7.1 (2018-05-03)¶
Added support for inline model constraints (Contributed by Kinware in #198)
Added docs and tests for using SSL (#202)
Added declared_attr (#204)
Allowed ModelLoader to passively load partial model (#216)
0.7.0 (2018-04-18)¶
Added Python 3.5 support (#187)
Added support to use dict as ident for Model.get (#192)
Added result loader (partial relationship support) (#13)
Added documentation on relationship and transaction (#146)
GINO 0.6¶
This is also version 1.0 beta 2.
Migrating to GINO 0.6¶
1. Task Local¶
We created a new Python package aiocontextvars from previous local.py
. If
you made use of the task local features, you should install this package.
Previous gino.enable_task_local()
and gino.disable_task_local()
are
replaced by aiocontextvars.enable_inherit()
and
aiocontextvars.disable_inherit()
. However, in GINO 0.5 they controlled the switch for the whole task local feature, while aiocontextvars by default offers task locals even without enable_inherit(), which only controls whether the local storage should be passed between chained tasks. When enabled, the behavior is the same as in 0.5, but you cannot completely turn off the task local feature while aiocontextvars is installed.
There is no equivalent of gino.get_local() and gino.reset_local() in aiocontextvars. The closest replacement is an aiocontextvars.ContextVar instance, through its get(), set() and delete() methods.
Previous gino.is_local_root()
is now
not aiocontextvars.Context.current().inherited
.
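A minimal sketch of the replacement, based on the get(), set() and delete() methods mentioned above (current_user is a hypothetical variable name):

import aiocontextvars

current_user = aiocontextvars.ContextVar('current_user')
current_user.set('alice')              # roughly what the old task-local storage offered
assert current_user.get() == 'alice'   # read the value in the current context
current_user.delete()                  # clear the value in the current context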
2. Engine¶
GINO 0.6 hides asyncpg.Pool
behind the new SQLAlchemy-alike
gino.GinoEngine
. Instead of doing this in 0.5:
async with db.create_pool('postgresql://...') as pool:
# your code here
You should change it to this in 0.6:
async with db.with_bind('postgresql://...') as engine:
# your code here
This equals to:
engine = await gino.create_engine('postgresql://...')
db.bind = engine
try:
# your code here
finally:
db.bind = None
await engine.close()
Or:
engine = await db.set_bind('postgresql://...')
try:
# your code here
finally:
await db.pop_bind().close()
Or even this:
db = await gino.Gino('postgresql://...')
try:
# your code here
finally:
await db.pop_bind().close()
Choose whichever suits you the best.
Obviously GinoEngine
doesn't provide asyncpg.Pool
methods directly any
longer, but you can get the underlying asyncpg.Pool
object through
engine.raw_pool
property.
GinoPool.get_current_connection()
is now changed to current_connection
property on GinoEngine
instances to support multiple engines.
GinoPool.execution_option
is gone, instead update_execution_options()
on GinoEngine
instance is available.
GinoPool().metadata
is gone, dialect
is still available.
GinoPool.release()
is removed in GinoEngine
and Gino
, the
release()
method on GinoConnection
object should be used instead.
These methods exist both in 0.5 GinoPool
and 0.6 GinoEngine
:
close()
, acquire()
, all()
, first()
, scalar()
, status()
.
3. GinoConnection¶
Similarly, GinoConnection
in 0.6 is no longer a subclass of
asyncpg.Connection
, instead it has an asyncpg.Connection instance, accessible through the GinoConnection.raw_connection property.
GinoConnection.metadata
is deleted in 0.6, while dialect
remained.
GinoConnection.execution_options()
is changed from a mutable dict in 0.5 to
a method returning a copy of current connection with the new options, the same
as SQLAlchemy behavior.
GinoConnection.release()
is still present, but its default behavior has been changed to permanently release the connection. You should pass the argument permanent=False to retain its previous behavior.
And all()
, first()
, scalar()
, status()
, iterate()
,
transaction()
remained in 0.6.
4. Query API¶
All five query APIs all()
, first()
, scalar()
, status()
,
iterate()
now accept the same parameters as SQLAlchemy execute()
,
meaning they accept raw SQL text, or multiple sets of parameters for
"executemany". Please note, if the parameters are recognized as "executemany",
none of the methods will return anything. Meanwhile, they no longer accept the parameter bind where they previously did. Just use the API on the GinoEngine
or
GinoConnection
object instead.
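A short sketch of the 0.6 behavior (MyModel and its name column are hypothetical stand-ins for any declared model bound to the engine):

# a single set of parameters returns results as usual
row = await engine.first(MyModel.query.where(MyModel.id == 1))

# multiple parameter sets are treated as "executemany" and return nothing
await engine.status(MyModel.insert(), [dict(name='a'), dict(name='b')])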
5. Transaction¶
Transaction interface is rewritten. Now in 0.6, a GinoTransaction
object is
provided consistently from all 3 methods:
async with db.transaction() as tx:
# within transaction
async with engine.transaction() as tx:
# within transaction
async with engine.acquire() as conn:
async with conn.transaction() as tx:
# within transaction
And different usage with await
:
tx = await db.transaction()
try:
# within transaction
await tx.commit()
except:
await tx.rollback()
raise
The GinoConnection
object is available at tx.connection
, while
underlying transaction object from database driver is available at
tx.transaction
- for asyncpg it is an asyncpg.transaction.Transaction
object.
0.6.6 (2018-05-18)¶
Backported a fix for failing binary type (#225)
0.6.5 (2018-04-18)¶
Abandoned 0.6.4 and keep 0.6.x stable
Backported doc for transaction
0.6.4 (2018-04-16)¶
Abandoned version, please use 0.7.0 instead.
0.6.3 (2018-04-08)¶
Added aiohttp support
Added support for calling create() on model instances (Contributed by Kinware in #178 #180)
Fixed get() by string, and misc environment issues (Contributed by Tony Wang in #191 #193 #183 #184)
0.6.2 (2018-03-24)¶
Fixed SQLAlchemy prefetch issue (#141)
Fixed issue that mixin class on Model not working (#174)
Added more documentation (Thanks Olaf Conradi for reviewing)
0.6.1 (2018-03-18)¶
Fixed create and drop for Enum type (#160)
A bit more documentation (#159)
0.6.0 (2018-03-14)¶
[Breaking] API Refactored, Pool replaced with Engine
New API Engine replaced asyncpg Pool (#59)
Supported different dialects, theoretically
Used aiocontextvars instead of builtin task local (#89)
[Breaking] Fixed query API with multiparams (executemany) to return correctly (#20)
[Breaking] The query methods no longer accept the parameter bind
[Breaking] Gino no longer exposes postgresql types
Added echo on engine (#142)
Added tests to cover 80% of code
Added gino extension on SchemaItem for create_all and so on (#76 #106)
Added gino extension on model classes for create() or drop()
Added _update_request_cls on CRUDModel (#147)
Rewrote the documentation (#146)
GINO 0.5¶
This is also version 1.0 beta 1.
0.5.8 (2018-02-14)¶
Preparing for 0.6.0 which will be a breaking release
Fixed wrong value of
Enum
in creation (Contributed by Sergey Kovalev in #126)
0.5.7 (2017-11-24)¶
This is an emergency fix for 0.5.6.
Fixed broken lazy connection (Contributed by Ádám Barancsuk in #114)
Added
Model.outerjoin
0.5.6 (2017-11-23)¶
Changed to use unnamed statement when possible (#80 #90)
Added more example (Contributed by Kentoseth in #109)
Added Model.join and made Model selectable (Contributed by Ádám Barancsuk in #112 #113)
0.5.5 (2017-10-18)¶
Ensured clean connection if transaction acquire fails (Contributed by Vladimir Goncharov in #87)
Added ability to reset local storage (#84)
Fixed bug in JSON property update
Added update chaining feature
0.5.4 (2017-10-04)¶
Updated example (Contributed by Kinware in #75)
Added Model.insert (Contributed by Neal Wang in #63)
Fixed issue that non-lazy acquiring fails dirty (#79)
0.5.3 (2017-09-23)¶
Fixed
no module named cutils
error (Contributed by Vladimir Goncharov in #73)
0.5.2 (2017-09-10)¶
Added missing driver name on dialect (#67)
Fixed dialect to support native decimal type (#67)
0.5.1 (2017-09-09)¶
This is an emergency fix for 0.5.0.
Reverted the extension, back to pure Python (#60)
Used SQLAlchemy
RowProxy
Added
first_or_404
Fixed bug that
GinoPool
cannot be inherited
0.5.0 (2017-09-03)¶
[Breaking] Internal refactor: extracted and isolated a few modules, partially rewritten
Extracted CRUD operations
Core operations are moved to dialect and execution context
Removed guess_model, switched to explicit execution options
Turned timeout parameter to an execution option
Extracted pool, connection and api from asyncpg_delegate
Added support for SQLAlchemy execution options, and a few custom options
[Breaking] Made Model.select return rows by default (#39)
Moved get_or_404 to extensions (#38)
Added iterator on model classes (#43)
Added Tornado extension (Contributed by Vladimir Goncharov)
Added Model.to_dict (#47)
Added an extension module to update asyncpg.Record with processed results
Early Development Releases¶
Considered as alpha releases.
0.4.1 (2017-08-20)¶
Support
select
on model instance
0.4.0 (2017-08-15)¶
Made get_or_404 more friendly when Sanic is missing (Contributed by Neal Wang in #23 #31)
Delegated sqlalchemy.__all__ (Contributed by Neal Wang in #10 #33)
[Breaking] Rewrote JSON/JSONB support (#29)
Added lazy parameter on db.acquire (Contributed by Binghan Li in #32)
Added Sanic integration (Contributed by Binghan Li, Tony Wang in #30 #32 #34)
Fixed iterate API to be compatible with asyncpg (#32)
Unified exceptions
[Breaking] Changed update API (#29)
Bug fixes
0.3.0 (2017-08-07)¶
Supported __table_args__ (#12)
Introduced task local to manage connection in context (#19)
Added query.gino extension for in-place execution
Refreshed README (#3)
Adopted PEP 487 (Contributed by Tony Wang in #17 #27)
Used weakref on __model__ of table and query (Contributed by Tony Wang)
Delegated asyncpg timeout parameter (Contributed by Neal Wang in #16 #22)
0.2.3 (2017-08-04)¶
Supported any primary key (Contributed by Tony Wang in #11)
0.2.2 (2017-08-02)¶
Supported SQLAlchemy result processor
Added rich support on JSON/JSONB
Bug fixes
0.2.1 (2017-07-28)¶
Added update and delete API
0.2.0 (2017-07-28)¶
Changed API, no longer reuses asyncpg API
0.1.1 (2017-07-25)¶
Added db.bind
API changed: parameter conn renamed to optional bind
Delegated asyncpg Pool with
db.create_pool
Internal enhancement and bug fixes
0.1.0 (2017-07-21)¶
First release on PyPI.