Connection Pooling

A connection pool is a standard technique used to maintain long running connections in memory for efficient re-use, as well as to provide management for the total number of connections an application might use simultaneously.

Particularly for server-side web applications, a connection pool is the standard way to maintain a “pool” of active database connections in memory which are reused across requests.

SQLAlchemy includes several connection pool implementations which integrate with the Engine. They can also be used directly for applications that want to add pooling to an otherwise plain DBAPI approach.

Connection Pool Configuration

The Engine returned by the create_engine() function in most cases has a QueuePool integrated, pre-configured with reasonable pooling defaults. If you’re reading this section only to learn how to enable pooling - congratulations! You’re already done.

The most common QueuePool tuning parameters can be passed directly to create_engine() as keyword arguments: pool_size, max_overflow, pool_recycle and pool_timeout. For example:

    engine = create_engine('postgresql://me@localhost/mydb',
                           pool_size=20, max_overflow=0)

In the case of SQLite, the SingletonThreadPool or NullPool are selected by the dialect to provide greater compatibility with SQLite’s threading and locking model, as well as to provide a reasonable default behavior to SQLite “memory” databases, which maintain their entire dataset within the scope of a single connection.
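As a quick check, the pool selected by the dialect can be inspected via the Engine.pool attribute. A minimal sketch, assuming a recent release in which file-based SQLite databases default to NullPool and memory databases to SingletonThreadPool:

    from sqlalchemy import create_engine

    # file-based SQLite database
    file_engine = create_engine('sqlite:///file.db')
    print(type(file_engine.pool).__name__)    # NullPool

    # in-memory SQLite database
    memory_engine = create_engine('sqlite://')
    print(type(memory_engine.pool).__name__)  # SingletonThreadPool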

All SQLAlchemy pool implementations have in common that none of them “pre create” connections - all implementations wait until first use before creating a connection. At that point, if no additional concurrent checkout requests for more connections are made, no additional connections are created. This is why it’s perfectly fine for create_engine() to default to using a QueuePool of size five without regard to whether or not the application really needs five connections queued up - the pool would only grow to that size if the application actually used five connections concurrently, in which case the usage of a small pool is an entirely appropriate default behavior.
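To illustrate the lazy behavior, the sketch below (assuming a reachable PostgreSQL database at the example URL) uses Pool.status() to show that the pool starts out empty and only acquires a connection at first checkout:

    from sqlalchemy import create_engine

    engine = create_engine('postgresql://me@localhost/mydb')

    # no DBAPI connections exist yet
    print(engine.pool.status())

    conn = engine.connect()   # first checkout creates the first connection
    conn.close()              # checkin returns it to the pool, still open

    # the pool now holds exactly one idle connection
    print(engine.pool.status())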

Switching Pool Implementations

The usual way to use a different kind of pool with create_engine() is to use the poolclass argument. This argument accepts a class imported from the sqlalchemy.pool module, and handles the details of building the pool for you. Common options include specifying QueuePool with SQLite:

    from sqlalchemy.pool import QueuePool
    engine = create_engine('sqlite:///file.db', poolclass=QueuePool)

Disabling pooling using NullPool:

    from sqlalchemy.pool import NullPool
    engine = create_engine(
        'postgresql+psycopg2://scott:tiger@localhost/test',
        poolclass=NullPool)

Using a Custom Connection Function

All Pool classes accept an argument creator which is a callable that creates a new connection. create_engine() accepts this function to pass on to the pool via an argument of the same name:

    import sqlalchemy.pool as pool
    import psycopg2

    from sqlalchemy import create_engine

    def getconn():
        c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
        # do things with 'c' to set up
        return c

    engine = create_engine('postgresql+psycopg2://', creator=getconn)

For most “initialize on connection” routines, it’s more convenient to use the PoolEvents event hooks, so that the usual URL argument to create_engine() is still usable. creator is there as a last resort for when a DBAPI has some form of connect that is not at all supported by SQLAlchemy.
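For comparison, a minimal sketch of the event-hook approach follows. The "connect" pool event fires once for each newly created DBAPI connection, before it enters the pool; the SET search_path statement and schema name here are hypothetical placeholders for whatever per-connection setup is needed:

    from sqlalchemy import create_engine, event

    engine = create_engine('postgresql+psycopg2://scott:tiger@localhost/test')

    @event.listens_for(engine, "connect")
    def setup_connection(dbapi_connection, connection_record):
        # runs once per new DBAPI connection, before it is pooled
        cursor = dbapi_connection.cursor()
        cursor.execute("SET search_path TO my_schema, public")
        cursor.close()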

Constructing a Pool

To use a Pool by itself, the creator function is the only argument that’s required and is passed first, followed by any additional options:

    import sqlalchemy.pool as pool
    import psycopg2

    def getconn():
        c = psycopg2.connect(user='ed', host='127.0.0.1', dbname='test')
        return c

    mypool = pool.QueuePool(getconn, max_overflow=10, pool_size=5)

DBAPI connections can then be procured from the pool using the Pool.connect() function. The return value of this method is a DBAPI connection that’s contained within a transparent proxy:

    # get a connection
    conn = mypool.connect()

    # use it
    cursor = conn.cursor()
    cursor.execute("select foo")

The purpose of the transparent proxy is to intercept the close() call, such that instead of the DBAPI connection being closed, it is returned to the pool:

    # "close" the connection.  Returns
    # it to the pool.
    conn.close()

The proxy also returns its contained DBAPI connection to the pool when it is garbage collected, though it’s not deterministic in Python that this occurs immediately (though it is typical with CPython).

The close() step also performs the important step of calling the rollback() method of the DBAPI connection. This is so that any existing transaction on the connection is removed, not only ensuring that no existing state remains on next usage, but also so that table and row locks are released and any isolated data snapshots are removed. This behavior can be disabled using the reset_on_return option of Pool.
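As a brief sketch, assuming the getconn() function from the examples above, the reset step can be turned off when constructing the pool directly:

    import sqlalchemy.pool as pool

    # reset_on_return=None skips the rollback-on-checkin step entirely
    mypool = pool.QueuePool(getconn, pool_size=5, reset_on_return=None)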

A particular pre-created Pool can be shared with one or more engines by passing it to the pool argument of create_engine():

    e = create_engine('postgresql://', pool=mypool)

Pool Events

Connection pools support an event interface that allows hooks to execute upon first connect, upon each new connection, and upon checkout and checkin of connections. See PoolEvents for details.
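A minimal sketch of checkout and checkin listeners, with the print calls as hypothetical placeholders for real bookkeeping logic:

    from sqlalchemy import create_engine, event

    engine = create_engine('postgresql://me@localhost/mydb')

    @event.listens_for(engine, "checkout")
    def on_checkout(dbapi_connection, connection_record, connection_proxy):
        # fires each time a connection is retrieved from the pool
        print("checkout")

    @event.listens_for(engine, "checkin")
    def on_checkin(dbapi_connection, connection_record):
        # fires each time a connection is returned to the pool
        print("checkin")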

Dealing with Disconnects

The connection pool has the ability to refresh individual connections as well as its entire set of connections, setting the previously pooled connections as “invalid”. A common use case is to allow the connection pool to gracefully recover when the database server has been restarted, and all previously established connections are no longer functional. There are two approaches to this.

Disconnect Handling - Pessimistic

The pessimistic approach refers to emitting a test statement on the SQL connection at the start of each connection pool checkout, to test that the database connection is still viable. Typically, this is a simple statement like “SELECT 1”, but may also make use of some DBAPI-specific method to test the connection for liveness.

The approach adds a small bit of overhead to the connection checkout process, but is otherwise the simplest and most reliable approach to completely eliminating database errors due to stale pooled connections. The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.

It is critical to note that the pre-ping approach does not accommodate connections dropped in the middle of transactions or other SQL operations. If the database becomes unavailable while a transaction is in progress, the transaction will be lost and the database error will be raised. While the Connection object will detect a “disconnect” situation and recycle the connection as well as invalidate the rest of the connection pool when this condition occurs, the individual operation where the exception was raised will be lost, and it’s up to the application to either abandon the operation, or retry the whole transaction again.
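One possible shape for such a retry is sketched below; run_transaction() and the work() callable are hypothetical names, not part of SQLAlchemy. The helper simply runs the whole transaction again, once, if the caught error indicates the connection was invalidated (the pool has already been refreshed by the disconnect detection at that point):

    from sqlalchemy import exc

    def run_transaction(engine, work, attempts=2):
        for attempt in range(attempts):
            try:
                with engine.connect() as conn:
                    with conn.begin():
                        return work(conn)
            except exc.DBAPIError as err:
                # the pool has been refreshed by disconnect detection;
                # retry the entire transaction exactly once
                if err.connection_invalidated and attempt + 1 < attempts:
                    continue
                raise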

Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument:

    engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)

The “pre ping” feature will normally emit SQL equivalent to “SELECT 1” each time a connection is checked out from the pool; if an error is raised that is detected as a “disconnect” situation, the connection will be immediately recycled, and all other pooled connections older than the current time are invalidated, so that the next time they are checked out, they will also be recycled before use.

If the database is still not available when “pre ping” runs, then the initial connect will fail and the error for failure to connect will be propagated normally. In the uncommon situation that the database is available for connections, but is not able to respond to a “ping”, the “pre_ping” will try up to three times before giving up, propagating the database error last received.

Note

The “SELECT 1” emitted by “pre-ping” is invoked within the scope of the connection pool / dialect, using a very short codepath for minimal Python latency. As such, this statement is not logged in the SQL echo output, and will not show up in SQLAlchemy’s engine logging.

New in version 1.2: Added “pre-ping” capability to the Pool class.

Custom / Legacy Pessimistic Ping

Before create_engine.pool_pre_ping was added, the “pre-ping” approach historically has been performed manually using the ConnectionEvents.engine_connect() engine event. The most common recipe for this is below, for reference purposes in case an application is already using such a recipe, or special behaviors are needed:

    from sqlalchemy import create_engine
    from sqlalchemy import exc
    from sqlalchemy import event
    from sqlalchemy import select

    some_engine = create_engine(...)

    @event.listens_for(some_engine, "engine_connect")
    def ping_connection(connection, branch):
        if branch:
            # "branch" refers to a sub-connection of a connection,
            # we don't want to bother pinging on these.
            return

        # turn off "close with result".  This flag is only used with
        # "connectionless" execution, otherwise will be False in any case
        save_should_close_with_result = connection.should_close_with_result
        connection.should_close_with_result = False

        try:
            # run a SELECT 1.  use a core select() so that
            # the SELECT of a scalar value without a table is
            # appropriately formatted for the backend
            connection.scalar(select([1]))
        except exc.DBAPIError as err:
            # catch SQLAlchemy's DBAPIError, which is a wrapper
            # for the DBAPI's exception.  It includes a .connection_invalidated
            # attribute which specifies if this connection is a "disconnect"
            # condition, which is based on inspection of the original exception
            # by the dialect in use.
            if err.connection_invalidated:
                # run the same SELECT again - the connection will re-validate
                # itself and establish a new connection.  The disconnect detection
                # here also causes the whole connection pool to be invalidated
                # so that all stale connections are discarded.
                connection.scalar(select([1]))
            else:
                raise
        finally:
            # restore "close with result"
            connection.should_close_with_result = save_should_close_with_result

The above recipe has the advantage that we are making use of SQLAlchemy’s facilities for detecting those DBAPI exceptions that are known to indicate a “disconnect” situation, as well as the Engine object’s ability to correctly invalidate the current connection pool when this condition occurs and allowing the current Connection to re-validate onto a new DBAPI connection.

Disconnect Handling - Optimistic

When pessimistic handling is not employed, as well as when the database is shut down and/or restarted in the middle of a connection’s period of use within a transaction, the other approach to dealing with stale / closed connections is to let SQLAlchemy handle disconnects as they occur, at which point all connections in the pool are invalidated, meaning they are assumed to be stale and will be refreshed upon next checkout. This behavior assumes the Pool is used in conjunction with an Engine. The Engine has logic which can detect disconnection events and refresh the pool automatically.

When the Connection attempts to use a DBAPI connection, and an exception is raised that corresponds to a “disconnect” event, the connection is invalidated. The Connection then calls the Pool.recreate() method, effectively invalidating all connections not currently checked out so that they are replaced with new ones upon next checkout. This flow is illustrated by the code example below:

    from sqlalchemy import create_engine, exc

    e = create_engine(...)
    c = e.connect()

    try:
        # suppose the database has been restarted.
        c.execute("SELECT * FROM table")
        c.close()
    except exc.DBAPIError as err:
        # an exception is raised, Connection is invalidated.
        if err.connection_invalidated:
            print("Connection was invalidated!")

    # after the invalidate event, a new connection
    # starts with a new Pool
    c = e.connect()
    c.execute("SELECT * FROM table")

The above example illustrates that no special intervention is needed to refresh the pool, which continues normally after a disconnection event is detected. However, one database exception is raised for each connection that is in use while the database unavailability event occurred. In a typical web application using an ORM Session, the above condition would correspond to a single request failing with a 500 error, then the web application continuing normally beyond that. Hence the approach is “optimistic” in that frequent database restarts are not anticipated.

Setting Pool Recycle

An additional setting that can augment the “optimistic” approach is to set the pool recycle parameter. This parameter prevents the pool from using a particular connection that has passed a certain age, and is appropriate for database backends such as MySQL that automatically close connections that have been stale for a particular period of time:

    from sqlalchemy import create_engine
    e = create_engine("mysql://scott:tiger@localhost/test", pool_recycle=3600)

Above, any DBAPI connection that has been open for more than one hour will be invalidated and replaced upon next checkout. Note that the invalidation only occurs during checkout - not on any connections that are held in a checked out state. pool_recycle is a function of the Pool itself, independent of whether or not an Engine is in use.

More on Invalidation

The Pool provides “connection invalidation” services which allow both explicit invalidation of a connection as well as automatic invalidation in response to conditions that are determined to render a connection unusable.

“Invalidation” means that a particular DBAPI connection is removed from the pool and discarded. The .close() method is called on this connection unless it is known that the connection itself is already closed; if this method fails, the exception is logged but the operation still proceeds.

When using an Engine, the Connection.invalidate() method is the usual entrypoint to explicit invalidation. Other conditions by which a DBAPI connection might be invalidated include:

  • a DBAPI exception such as OperationalError, raised when a method like connection.execute() is called, is detected as indicating a so-called “disconnect” condition. As the Python DBAPI provides no standard system for determining the nature of an exception, all SQLAlchemy dialects include a system called is_disconnect() which will examine the contents of an exception object, including the string message and any potential error codes included with it, in order to determine if this exception indicates that the connection is no longer usable. If this is the case, the _ConnectionFairy.invalidate() method is called and the DBAPI connection is then discarded.

  • When the connection is returned to the pool, and calling the connection.rollback() or connection.commit() method, as dictated by the pool’s “reset on return” behavior, throws an exception. A final attempt at calling .close() on the connection will be made, and it is then discarded.

  • When a listener implementing PoolEvents.checkout() raises the DisconnectionError exception, indicating that the connection won’t be usable and a new connection attempt needs to be made.

All invalidations which occur will invoke the PoolEvents.invalidate() event.
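A listener for this event can be attached through the usual event interface; a minimal sketch, with the print call as a hypothetical placeholder:

    from sqlalchemy import create_engine, event

    engine = create_engine('postgresql://me@localhost/mydb')

    @event.listens_for(engine, "invalidate")
    def on_invalidate(dbapi_connection, connection_record, exception):
        # fires whenever a DBAPI connection is marked invalid
        print("Connection invalidated: %r" % exception)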

Using FIFO vs. LIFO

The QueuePool class features a flag called QueuePool.use_lifo, which can also be accessed from create_engine() via the flag create_engine.pool_use_lifo. Setting this flag to True causes the pool’s “queue” behavior to instead be that of a “stack”, e.g. the last connection to be returned to the pool is the first one to be used on the next request. In contrast to the pool’s long-standing behavior of first-in-first-out, which produces a round-robin effect of using each connection in the pool in series, LIFO mode allows excess connections to remain idle in the pool, allowing server-side timeout schemes to close these connections out. The difference between FIFO and LIFO is basically whether or not it’s desirable for the pool to keep a full set of connections ready to go even during idle periods:

    engine = create_engine(
        "postgresql://", pool_use_lifo=True, pool_pre_ping=True)

Above, we also make use of the create_engine.pool_pre_ping flag so that connections which are closed from the server side are gracefully handled by the connection pool and replaced with a new connection.

Note that the flag only applies to QueuePool use.

New in version 1.3.

See also

Dealing with Disconnects

Using Connection Pools with Multiprocessing

It’s critical that when using a connection pool, and by extension when using an Engine created via create_engine(), the pooled connections are not shared with a forked process. TCP connections are represented as file descriptors, which usually work across process boundaries, meaning this will cause concurrent access to the file descriptor on behalf of two or more entirely independent Python interpreter states.

There are two approaches to dealing with this.

The first is, either create a new Engine within the child process, or upon an existing Engine, call Engine.dispose() before the child process uses any connections. This will remove all existing connections from the pool so that it makes all new ones. Below is a simple version using multiprocessing.Process, but this idea should be adapted to the style of forking in use:

    from multiprocessing import Process

    from sqlalchemy import create_engine

    engine = create_engine("...")

    def run_in_process():
        engine.dispose()

        with engine.connect() as conn:
            conn.execute("...")

    p = Process(target=run_in_process)
    p.start()

The next approach is to instrument the Pool itself with events so that connections are automatically invalidated in the subprocess. This is a little more magical but probably more foolproof:

    from sqlalchemy import create_engine
    from sqlalchemy import event
    from sqlalchemy import exc
    import os

    engine = create_engine("...")

    @event.listens_for(engine, "connect")
    def connect(dbapi_connection, connection_record):
        connection_record.info['pid'] = os.getpid()

    @event.listens_for(engine, "checkout")
    def checkout(dbapi_connection, connection_record, connection_proxy):
        pid = os.getpid()
        if connection_record.info['pid'] != pid:
            connection_record.connection = connection_proxy.connection = None
            raise exc.DisconnectionError(
                "Connection record belongs to pid %s, "
                "attempting to check out in pid %s" %
                (connection_record.info['pid'], pid)
            )

Above, we use an approach similar to that described in Disconnect Handling - Pessimistic to treat a DBAPI connection that originated in a different parent process as an “invalid” connection, coercing the pool to recycle the connection record to make a new connection.

API Documentation - Available Pool Implementations

  • class sqlalchemy.pool.Pool(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, dialect=None, pre_ping=False, _dispatch=None)
  • Bases: sqlalchemy.log.Identified

Abstract base class for connection pools.

  • __init__(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, dialect=None, pre_ping=False, _dispatch=None)
  • Construct a Pool.

    • Parameters
      • creator – a callable function that returns a DB-API connection object. The function will be called with parameters.

      • recycle – If set to a value other than -1, number of seconds between connection recycling, which means upon checkout, if this timeout is surpassed the connection will be closed and replaced with a newly opened connection. Defaults to -1.

      • logging_name – String identifier which will be used within the “name” field of logging records generated within the “sqlalchemy.pool” logger. Defaults to a hex string of the object’s id.

      • echo – if True, the connection pool will log informational output such as when connections are invalidated as well as when connections are recycled to the default log handler, which defaults to sys.stdout for output. If set to the string "debug", the logging will include pool checkouts and checkins.

        The Pool.echo parameter can also be set from the create_engine() call by using the create_engine.echo_pool parameter.

        See also

        Configuring Logging - further detail on how to configure logging.

      • use_threadlocal – If set to True, repeated calls to connect() within the same application thread will be guaranteed to return the same connection object that is already checked out. This is a legacy use case and the flag has no effect when using the pool with an Engine object.

        Deprecated since version 1.3: The Pool.use_threadlocal parameter is deprecated and will be removed in a future release.

      • reset_on_return – Determine steps to take on connections as they are returned to the pool. reset_on_return can have any of these values:

        • "rollback" - call rollback() on the connection, to release locks and transaction resources. This is the default value. The vast majority of use cases should leave this value set.

        • True - same as "rollback", this is here for backwards compatibility.

        • "commit" - call commit() on the connection, to release locks and transaction resources. A commit here may be desirable for databases that cache query plans if a commit is emitted, such as Microsoft SQL Server. However, this value is more dangerous than "rollback" because any data changes present on the transaction are committed unconditionally.

        • None - don’t do anything on the connection. This setting should generally only be made on a database that has no transaction support at all, namely MySQL MyISAM; when used on this backend, performance can be improved as the “rollback” call is still expensive on MySQL. It is strongly recommended that this setting not be used for transaction-supporting databases in conjunction with a persistent pool such as QueuePool, as it opens the possibility for connections still in a transaction to be idle in the pool. The setting may be appropriate in the case of NullPool or special circumstances where the connection pool in use is not being used to maintain connection lifecycle.

        • False - same as None, this is here for backwards compatibility.

      • events – a list of 2-tuples, each of the form (callable, target), which will be passed to event.listen() upon construction. Provided here so that event listeners can be assigned via create_engine() before dialect-level listeners are applied.

      • listeners – A list of PoolListener-like objects or dictionaries of callables that receive events when DB-API connections are created, checked out and checked in to the pool.

        Deprecated since version 0.7: PoolListener is deprecated in favor of the PoolEvents listener interface. The Pool.listeners parameter will be removed in a future release.

      • dialect – a Dialect that will handle the job of calling rollback(), close(), or commit() on DBAPI connections. If omitted, a built-in “stub” dialect is used. Applications that make use of create_engine() should not use this parameter as it is handled by the engine creation strategy.

        New in version 1.1: dialect is now a public parameter to the Pool.

      • pre_ping – if True, the pool will emit a “ping” (typically “SELECT 1”, but is dialect-specific) on the connection upon checkout, to test if the connection is alive or not. If not, the connection is transparently re-connected and upon success, all other pooled connections established prior to that timestamp are invalidated. Requires that a dialect is passed as well to interpret the disconnection error.

        New in version 1.2.

  • connect()
  • Return a DBAPI connection from the pool.

The connection is instrumented such that when its close() method is called, the connection will be returned to the pool.

  • dispose()
  • Dispose of this pool.

This method leaves the possibility of checked-out connections remaining open, as it only affects connections that are idle in the pool.

See also

Pool.recreate()

  • recreate()
  • Return a new Pool, of the same class as this one and configured with identical creation arguments.

This method is used in conjunction with dispose() to close out an entire Pool and create a new one in its place.

  • unique_connection()
  • Produce a DBAPI connection that is not referenced by any thread-local context.

This method is equivalent to Pool.connect() when the Pool.use_threadlocal flag is not set to True. When Pool.use_threadlocal is True, the Pool.unique_connection() method provides a means of bypassing the threadlocal context.

  • class sqlalchemy.pool.QueuePool(creator, pool_size=5, max_overflow=10, timeout=30, use_lifo=False, **kw)
  • Bases: sqlalchemy.pool.base.Pool

A Pool that imposes a limit on the number of open connections.

QueuePool is the default pooling implementation used for all Engine objects, unless the SQLite dialect is in use.

  • __init__(creator, pool_size=5, max_overflow=10, timeout=30, use_lifo=False, **kw)
  • Construct a QueuePool.

    • Parameters
      • creator – a callable function that returns a DB-API connection object, same as that of Pool.creator.

      • pool_size – The size of the pool to be maintained, defaults to 5. This is the largest number of connections that will be kept persistently in the pool. Note that the pool begins with no connections; once this number of connections is requested, that number of connections will remain. pool_size can be set to 0 to indicate no size limit; to disable pooling, use a NullPool instead.

      • max_overflow – The maximum overflow size of the pool. When the number of checked-out connections reaches the size set in pool_size, additional connections will be returned up to this limit. When those additional connections are returned to the pool, they are disconnected and discarded. It follows then that the total number of simultaneous connections the pool will allow is pool_size + max_overflow, and the total number of “sleeping” connections the pool will allow is pool_size. max_overflow can be set to -1 to indicate no overflow limit; no limit will be placed on the total number of concurrent connections. Defaults to 10.

      • timeout – The number of seconds to wait before giving up on returning a connection. Defaults to 30.

      • use_lifo

use LIFO (last-in-first-out) when retrieving connections instead of FIFO (first-in-first-out). Using LIFO, a server-side timeout scheme can reduce the number of connections used during non-peak periods of use. When planning for server-side timeouts, ensure that a recycle or pre-ping strategy is in use to gracefully handle stale connections.

New in version 1.3.

See also

Using FIFO vs. LIFO

Dealing with Disconnects

      • **kw – Other keyword arguments including Pool.recycle, Pool.echo, Pool.reset_on_return and others are passed to the Pool constructor.

  • connect()

inherited from the connect() method of Pool

Return a DBAPI connection from the pool.

The connection is instrumented such that when its close() method is called, the connection will be returned to the pool.

  • unique_connection()

inherited from the unique_connection() method of Pool

Produce a DBAPI connection that is not referenced by any thread-local context.

This method is equivalent to Pool.connect() when the Pool.use_threadlocal flag is not set to True. When Pool.use_threadlocal is True, the Pool.unique_connection() method provides a means of bypassing the threadlocal context.

  • class sqlalchemy.pool.SingletonThreadPool(creator, pool_size=5, **kw)
  • Bases: sqlalchemy.pool.base.Pool

A Pool that maintains one connection per thread.

Maintains one connection per thread, never moving a connection to a thread other than the one in which it was created.

Warning

The SingletonThreadPool will call .close() on arbitrary connections that exist beyond the size setting of pool_size, e.g. if more unique thread identities than what pool_size states are used. This cleanup is non-deterministic and not sensitive to whether or not the connections linked to those thread identities are currently in use.

SingletonThreadPool may be improved in a future release, however in its current status it is generally used only for test scenarios using a SQLite :memory: database and is not recommended for production use.

Options are the same as those of Pool, as well as:

  • Parameters
  • pool_size – The number of threads in which to maintain connections at once. Defaults to five.

SingletonThreadPool is used by the SQLite dialect automatically when a memory-based database is used. See SQLite.

  • __init__(creator, pool_size=5, **kw)
  • Construct a Pool.

    The parameters are the same as those accepted by Pool, described above under Pool.__init__().

  • class sqlalchemy.pool.AssertionPool(*args, **kw)
  • Bases: sqlalchemy.pool.base.Pool

A Pool that allows at most one checked out connection at any given time.

This will raise an exception if more than one connection is checked out at a time. Useful for debugging code that is using more connections than desired.

  • class sqlalchemy.pool.NullPool(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, dialect=None, pre_ping=False, _dispatch=None)
  • Bases: sqlalchemy.pool.base.Pool

A Pool which does not pool connections.

Instead it literally opens and closes the underlying DB-API connection for each connection open/close.

Reconnect-related functions such as recycle and connection invalidation are not supported by this Pool implementation, since no connections are held persistently.

  • class sqlalchemy.pool.StaticPool(creator, recycle=-1, echo=None, use_threadlocal=False, logging_name=None, reset_on_return=True, listeners=None, events=None, dialect=None, pre_ping=False, _dispatch=None)
  • Bases: sqlalchemy.pool.base.Pool

A Pool of exactly one connection, used for all requests.

Reconnect-related functions such as recycle and connection invalidation (which is also used to support auto-reconnect) are not currently supported by this Pool implementation but may be implemented in a future release.

  • class sqlalchemy.pool._ConnectionFairy(dbapi_connection, connection_record, echo)
  • Proxies a DBAPI connection and provides return-on-dereference support.

This is an internal object used by the Pool implementation to provide context management to a DBAPI connection delivered by that Pool.

The name “fairy” is inspired by the fact that the _ConnectionFairy object’s lifespan is transitory, as it lasts only for the length of a specific DBAPI connection being checked out from the pool, and additionally that as a transparent proxy, it is mostly invisible.

See also

_ConnectionRecord

  • connection_record = None
  • A reference to the _ConnectionRecord object associated with the DBAPI connection.

This is currently an internal accessor which is subject to change.

  • connection = None
  • A reference to the actual DBAPI connection being tracked.

  • cursor(*args, **kwargs)

  • Return a new DBAPI cursor for the underlying connection.

This method is a proxy for the connection.cursor() DBAPI method.

  • detach()
  • Separate this connection from its Pool.

This means that the connection will no longer be returned to the pool when closed, and will instead be literally closed. The containing ConnectionRecord is separated from the DB-API connection, and will create a new connection when next used.

Note that any overall connection limiting constraints imposed by a Pool implementation may be violated after a detach, as the detached connection is removed from the pool’s knowledge and control.
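As an illustrative sketch, the proxy is typically obtained via Engine.raw_connection(); after detach(), close() really closes the DBAPI connection instead of returning it to the pool:

    from sqlalchemy import create_engine

    engine = create_engine('postgresql://me@localhost/mydb')

    conn = engine.raw_connection()   # DBAPI connection wrapped in a _ConnectionFairy
    conn.detach()                    # separate it from the pool
    # ... use the connection outside of pool management ...
    conn.close()                     # truly closes; not returned to the pool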

  • info
  • Info dictionary associated with the underlying DBAPI connection referred to by this _ConnectionFairy, allowing user-defined data to be associated with the connection.

The data here will follow along with the DBAPI connection including after it is returned to the connection pool and used again in subsequent instances of _ConnectionFairy. It is shared with the _ConnectionRecord.info and Connection.info accessors.

The dictionary associated with a particular DBAPI connection isdiscarded when the connection itself is discarded.

  • invalidate(e=None, soft=False)
  • Mark this connection as invalidated.

This method can be called directly, and is also called as a result of the Connection.invalidate() method. When invoked, the DBAPI connection is immediately closed and discarded from further use by the pool. The invalidation mechanism proceeds via the _ConnectionRecord.invalidate() internal method.

    • Parameters
      • e – an exception object indicating a reason for the invalidation.

      • soft – if True, the connection isn’t closed; instead, this connection will be recycled on next checkout.

New in version 1.0.3.

See also

More on Invalidation

  • property is_valid
  • Return True if this _ConnectionFairy still refers to an active DBAPI connection.

  • property record_info

  • Info dictionary associated with the _ConnectionRecord container referred to by this _ConnectionFairy.

Unlike the _ConnectionFairy.info dictionary, the lifespan of this dictionary is persistent across connections that are disconnected and/or invalidated within the lifespan of a _ConnectionRecord.

New in version 1.1.

  • class sqlalchemy.pool._ConnectionRecord(pool, connect=True)
  • Internal object which maintains an individual DBAPI connection referenced by a Pool.

The _ConnectionRecord object always exists for any particular DBAPI connection whether or not that DBAPI connection has been “checked out”. This is in contrast to the _ConnectionFairy which is only a public facade to the DBAPI connection while it is checked out.

A _ConnectionRecord may exist for a span longer than that of a single DBAPI connection. For example, if the _ConnectionRecord.invalidate() method is called, the DBAPI connection associated with this _ConnectionRecord will be discarded, but the _ConnectionRecord may be used again, in which case a new DBAPI connection is produced when the Pool next uses this record.

The _ConnectionRecord is delivered along with connection pool events, including PoolEvents.connect() and PoolEvents.checkout(), however _ConnectionRecord still remains an internal object whose API and internals may change.

See also

_ConnectionFairy

  • connection = None
  • A reference to the actual DBAPI connection being tracked.

May be None if this _ConnectionRecord has been marked as invalidated; a new DBAPI connection may replace it if the owning pool calls upon this _ConnectionRecord to reconnect.

  • info
  • The .info dictionary associated with the DBAPI connection.

This dictionary is shared among the _ConnectionFairy.info and Connection.info accessors.

Note

The lifespan of this dictionary is linked to the DBAPI connection itself, meaning that it is discarded each time the DBAPI connection is closed and/or invalidated. The _ConnectionRecord.record_info dictionary remains persistent throughout the lifespan of the _ConnectionRecord container.

  • invalidate(e=None, soft=False)
  • Invalidate the DBAPI connection held by this _ConnectionRecord.

This method is called for all connection invalidations, including when the _ConnectionFairy.invalidate() or Connection.invalidate() methods are called, as well as when any so-called “automatic invalidation” condition occurs.

    • Parameters
      • e – an exception object indicating a reason for the invalidation.

      • soft – if True, the connection isn’t closed; instead, this connection will be recycled on next checkout.

New in version 1.0.3.

See also

More on Invalidation

  • record_info
  • An “info’ dictionary associated with the connection recorditself.

Unlike the _ConnectionRecord.info dictionary, which is linked to the lifespan of the DBAPI connection, this dictionary is linked to the lifespan of the _ConnectionRecord container itself and will remain persistent throughout the life of the _ConnectionRecord.

New in version 1.1.