Working with Engines and Connections

This section details direct usage of the Engine, Connection, and related objects. It's important to note that when using the SQLAlchemy ORM, these objects are not generally accessed; instead, the Session object is used as the interface to the database. However, for applications that are built around direct usage of textual SQL statements and/or SQL expression constructs without involvement by the ORM's higher level management services, the Engine and Connection are king (and queen?) - read on.

Basic Usage

Recall from Engine Configuration that an Engine is created via the create_engine() call:

engine = create_engine('mysql://scott:tiger@localhost/test')

The typical usage of create_engine() is once per particular database URL, held globally for the lifetime of a single application process. A single Engine manages many individual DBAPI connections on behalf of the process and is intended to be called upon in a concurrent fashion. The Engine is not synonymous to the DBAPI connect function, which represents just one connection resource - the Engine is most efficient when created just once at the module level of an application, not per-object or per-function call.

For a multiple-process application that uses the os.fork system call, or for example the Python multiprocessing module, it's usually required that a separate Engine be used for each child process. This is because the Engine maintains a reference to a connection pool that ultimately references DBAPI connections - these tend to not be portable across process boundaries. An Engine that is configured not to use pooling (which is achieved via the usage of NullPool) does not have this requirement.
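As a minimal sketch of the non-pooling configuration mentioned above (the URL is illustrative, reusing the example from Basic Usage):

from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# no pooling: each checkout opens a new DBAPI connection and close()
# discards it fully, so no connections are shared across a fork
engine = create_engine('mysql://scott:tiger@localhost/test', poolclass=NullPool)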

The engine can be used directly to issue SQL to the database. The most generic way is to first procure a connection resource, which you get via the Engine.connect() method:

connection = engine.connect()
result = connection.execute("select username from users")
for row in result:
    print("username:", row['username'])
connection.close()

The connection is an instance of Connection, which is a proxy object for an actual DBAPI connection. The DBAPI connection is retrieved from the connection pool at the point at which Connection is created.

The returned result is an instance of ResultProxy, which references a DBAPI cursor and provides a largely compatible interface with that of the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy when all of its result rows (if any) are exhausted. A ResultProxy that returns no rows, such as that of an UPDATE statement (without any returned rows), releases cursor resources immediately upon construction.

When the close() method is called, the referenced DBAPI connection is released to the connection pool. From the perspective of the database itself, nothing is actually “closed”, assuming pooling is in use. The pooling mechanism issues a rollback() call on the DBAPI connection so that any transactional state or locks are removed, and the connection is ready for its next usage.

The above procedure can be performed in a shorthand way by using the execute() method of Engine itself:

result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])

Where above, the execute() method acquires a new Connection on its own, executes the statement with that object, and returns the ResultProxy. In this case, the ResultProxy contains a special flag known as close_with_result, which indicates that when its underlying DBAPI cursor is closed, the Connection object itself is also closed, which again returns the DBAPI connection to the connection pool, releasing transactional resources.

If the ResultProxy potentially has rows remaining, it can be instructed to close out its resources explicitly:

result.close()

If the ResultProxy has pending rows remaining and is dereferenced by the application without being closed, Python garbage collection will ultimately close out the cursor as well as trigger a return of the pooled DBAPI connection resource to the pool (SQLAlchemy achieves this by the usage of weakref callbacks - never the __del__ method) - however it's never a good idea to rely upon Python garbage collection to manage resources.

Our example above illustrated the execution of a textual SQL string. The execute() method can of course accommodate more than that, including the variety of SQL expression constructs described in SQL Expression Language Tutorial.
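For example, a select() construct can be passed to execute() in the same way as a string. A brief sketch, using the lightweight table() and column() constructs purely for illustration:

from sqlalchemy.sql import column, select, table

# a lightweight table construct standing in for the "users" table
users = table('users', column('username'))

connection = engine.connect()
result = connection.execute(select([users.c.username]))
for row in result:
    print("username:", row['username'])
connection.close()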

Using Transactions

Note

This section describes how to use transactions when working directly with Engine and Connection objects. When using the SQLAlchemy ORM, the public API for transaction control is via the Session object, which makes usage of the Transaction object internally. See Managing Transactions for further information.

The Connection object provides a begin() method which returns a Transaction object. This object is usually used within a try/except clause so that it is guaranteed to invoke Transaction.rollback() or Transaction.commit():

connection = engine.connect()
trans = connection.begin()
try:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')
    trans.commit()
except:
    trans.rollback()
    raise

The above block can be created more succinctly using context managers, either given an Engine:

# runs a transaction
with engine.begin() as connection:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')

Or from the Connection, in which case the Transaction object is available as well:

with connection.begin() as trans:
    r1 = connection.execute(table1.select())
    connection.execute(table1.insert(), col1=7, col2='this is some data')

Nesting of Transaction Blocks

The Transaction object also handles “nested” behavior by keeping track of the outermost begin/commit pair. In this example, two functions both issue a transaction on a Connection, but only the outermost Transaction object actually takes effect when it is committed.

# method_a starts a transaction and calls method_b
def method_a(connection):
    trans = connection.begin()  # open a transaction
    try:
        method_b(connection)
        trans.commit()  # transaction is committed here
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# method_b also starts a transaction
def method_b(connection):
    trans = connection.begin()  # open a transaction - this runs in the context of method_a's transaction
    try:
        connection.execute("insert into mytable values ('bat', 'lala')")
        connection.execute(mytable.insert(), col1='bat', col2='lala')
        trans.commit()  # transaction is not committed yet
    except:
        trans.rollback()  # this rolls back the transaction unconditionally
        raise

# open a Connection and call method_a
conn = engine.connect()
method_a(conn)
conn.close()

Above, method_a is called first, which calls connection.begin(). Then it calls method_b. When method_b calls connection.begin(), it just increments a counter that is decremented when it calls commit(). If either method_a or method_b calls rollback(), the whole transaction is rolled back. The transaction is not committed until method_a calls the commit() method. This “nesting” behavior allows the creation of functions which “guarantee” that a transaction will be used if one was not already available, but will automatically participate in an enclosing transaction if one exists.

Understanding Autocommit

The previous transaction example illustrates how to use Transaction so that several executions can take part in the same transaction. What happens when we issue an INSERT, UPDATE or DELETE call without using Transaction? While some DBAPI implementations provide various special “non-transactional” modes, the core behavior of DBAPI per PEP-0249 is that a transaction is always in progress, providing only rollback() and commit() methods but no begin(). SQLAlchemy assumes this is the case for any given DBAPI.

Given this requirement, SQLAlchemy implements its own “autocommit” feature which works completely consistently across all backends. This is achieved by detecting statements which represent data-changing operations, i.e. INSERT, UPDATE, DELETE, as well as data definition language (DDL) statements such as CREATE TABLE, ALTER TABLE, and then issuing a COMMIT automatically if no transaction is in progress. The detection is based on the presence of the autocommit=True execution option on the statement. If the statement is a text-only statement and the flag is not set, a regular expression is used to detect INSERT, UPDATE, DELETE, as well as a variety of other commands for a particular backend:

conn = engine.connect()
conn.execute("INSERT INTO users VALUES (1, 'john')")  # autocommits

The “autocommit” feature is only in effect when no Transaction has otherwise been declared. This means the feature is not generally used with the ORM, as the Session object by default always maintains an ongoing Transaction.

Full control of the “autocommit” behavior is available using the generative Connection.execution_options() method provided on Connection, Engine, and Executable, using the “autocommit” flag which will turn on or off the autocommit for the selected scope. For example, a text() construct representing a stored procedure that commits might use it so that a SELECT statement will issue a COMMIT:

engine.execute(text("SELECT my_mutating_procedure()").execution_options(autocommit=True))

Connectionless Execution, Implicit Execution

Recall from the first section we mentioned executing with and without explicit usage of Connection. “Connectionless” execution refers to the usage of the execute() method on an object which is not a Connection. This was illustrated using the execute() method of Engine:

result = engine.execute("select username from users")
for row in result:
    print("username:", row['username'])

In addition to “connectionless” execution, it is also possible to use the execute() method of any Executable construct, which is a marker for SQL expression objects that support execution. The SQL expression object itself references an Engine or Connection known as the bind, which it uses in order to provide so-called “implicit” execution services.

Given a table as below:

from sqlalchemy import MetaData, Table, Column, Integer, String

meta = MetaData()
users_table = Table('users', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)

Explicit execution delivers the SQL text or constructed SQL expression to the execute() method of Connection:

engine = create_engine('sqlite:///file.db')
connection = engine.connect()
result = connection.execute(users_table.select())
for row in result:
    # ....
connection.close()

Explicit, connectionless execution delivers the expression to the execute() method of Engine:

engine = create_engine('sqlite:///file.db')
result = engine.execute(users_table.select())
for row in result:
    # ....
result.close()

Implicit execution is also connectionless, and makes usage of the execute() method on the expression itself. This method is provided as part of the Executable class, which refers to a SQL statement that is sufficient for being invoked against the database. The method makes usage of the assumption that either an Engine or Connection has been bound to the expression object. By “bound” we mean that the special attribute MetaData.bind has been used to associate a series of Table objects and all SQL constructs derived from them with a specific engine:

engine = create_engine('sqlite:///file.db')
meta.bind = engine
result = users_table.select().execute()
for row in result:
    # ....
result.close()

Above, we associate an Engine with a MetaData object using the special attribute MetaData.bind. The select() construct produced from the Table object has a method execute(), which will search for an Engine that's “bound” to the Table.

Overall, the usage of “bound metadata” has three general effects:

  • SQL statement objects gain an Executable.execute() method which automatically locates a “bind” with which to execute themselves.

  • The ORM Session object supports using “bound metadata” in order to establish which Engine should be used to invoke SQL statements on behalf of a particular mapped class, though the Session also features its own explicit system of establishing complex Engine/mapped class configurations.

  • The MetaData.create_all(), MetaData.drop_all(), Table.create(), Table.drop(), and “autoload” features all make usage of the bound Engine automatically without the need to pass it explicitly.

Note

The concepts of “bound metadata” and “implicit execution” are not emphasized in modern SQLAlchemy. While they offer some convenience, they are no longer required by any API and are never necessary.

In applications where multiple Engine objects are present, each one logically associated with a certain set of tables (i.e. vertical sharding), the “bound metadata” technique can be used so that individual Table objects can refer to the appropriate Engine automatically; in particular this is supported within the ORM via the Session object as a means to associate Table objects with an appropriate Engine, as an alternative to using the bind arguments accepted directly by the Session.

However, the “implicit execution” technique is not at all appropriate for use with the ORM, as it bypasses the transactional context maintained by the Session.

Overall, in the vast majority of cases, “bound metadata” and “implicit execution” are not useful. While “bound metadata” has a marginal level of usefulness with regards to ORM configuration, “implicit execution” is a very old usage pattern that in most cases is more confusing than it is helpful, and its usage is discouraged. Both patterns seem to encourage the overuse of expedient “short cuts” in application design which lead to problems later on.

Modern SQLAlchemy usage, especially the ORM, places a heavy stress on working within the context of a transaction at all times; the “implicit execution” concept makes the job of associating statement execution with a particular transaction much more difficult. The Executable.execute() method on a particular SQL statement usually implies that the execution is not part of any particular transaction, which is usually not the desired effect.

In both “connectionless” examples, the Connection is created behind the scenes; the ResultProxy returned by the execute() call references the Connection used to issue the SQL statement. When the ResultProxy is closed, the underlying Connection is closed for us, resulting in the DBAPI connection being returned to the pool with transactional resources removed.

Translation of Schema Names

To support multi-tenancy applications that distribute common sets of tables into multiple schemas, the Connection.execution_options.schema_translate_map execution option may be used to repurpose a set of Table objects to render under different schema names without any changes.

Given a table:

user_table = Table(
    'user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(50))
)

The “schema” of this Table as defined by the Table.schema attribute is None. The Connection.execution_options.schema_translate_map can specify that all Table objects with a schema of None would instead render the schema as user_schema_one:

connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

result = connection.execute(user_table.select())

The above code will invoke SQL on the database of the form:

SELECT user_schema_one.user.id, user_schema_one.user.name FROM
user_schema_one.user

That is, the schema name is substituted with our translated name. The map can specify any number of target->destination schemas:

connection = engine.connect().execution_options(
    schema_translate_map={
        None: "user_schema_one",      # no schema name -> "user_schema_one"
        "special": "special_schema",  # schema="special" becomes "special_schema"
        "public": None                # Table objects with schema="public" will render with no schema
    })

The Connection.execution_options.schema_translate_map parameter affects all DDL and SQL constructs generated from the SQL expression language, as derived from the Table or Sequence objects. It does not impact literal string SQL used via the text() construct nor via plain strings passed to Connection.execute().
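As a brief sketch of the DDL case, assuming the metadata collection and user_table from the example above, DDL emitted through a connection that carries the map renders the translated schema name as well:

connection = engine.connect().execution_options(
    schema_translate_map={None: "user_schema_one"})

# CREATE TABLE statements will render against "user_schema_one"
metadata.create_all(connection)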

The feature takes effect only in those cases where the name of the schema is derived directly from that of a Table or Sequence; it does not impact methods where a string schema name is passed directly. By this pattern, it takes effect within the “can create” / “can drop” checks performed when methods such as MetaData.create_all() or MetaData.drop_all() are called, and it takes effect when using table reflection given a Table object. However it does not affect the operations present on the Inspector object, as the schema name is passed to these methods explicitly.

New in version 1.1.

Engine Disposal

The Engine refers to a connection pool, which means under normal circumstances, there are open database connections present while the Engine object is still resident in memory. When an Engine is garbage collected, its connection pool is no longer referred to by that Engine, and assuming none of its connections are still checked out, the pool and its connections will also be garbage collected, which has the effect of closing out the actual database connections as well. But otherwise, the Engine will hold onto open database connections assuming it uses the normally default pool implementation of QueuePool.

The Engine is intended to normally be a permanent fixture established up-front and maintained throughout the lifespan of an application. It is not intended to be created and disposed on a per-connection basis; it is instead a registry that maintains both a pool of connections as well as configurational information about the database and DBAPI in use, as well as some degree of internal caching of per-database resources.

However, there are many cases where it is desirable that all connection resources referred to by the Engine be completely closed out. It's generally not a good idea to rely on Python garbage collection for this to occur for these cases; instead, the Engine can be explicitly disposed using the Engine.dispose() method. This disposes of the engine's underlying connection pool and replaces it with a new one that's empty. Provided that the Engine is discarded at this point and no longer used, all checked-in connections which it refers to will also be fully closed.

Valid use cases for calling Engine.dispose() include:

  • When a program wants to release any remaining checked-in connections held by the connection pool and expects to no longer be connected to that database at all for any future operations.

  • When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries (see the sketch following this list).

  • Within test suites or multitenancy scenarios where many ad-hoc, short-lived Engine objects may be created and disposed.
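As a minimal sketch of the fork() scenario described in the second bullet above (the URL is illustrative):

import multiprocessing

from sqlalchemy import create_engine

engine = create_engine("postgresql://scott:tiger@localhost/test")

def run_in_child():
    # the Engine was copied into this process by fork(); discard the
    # inherited connection pool so that brand new connections are made
    engine.dispose()

    with engine.connect() as connection:
        connection.execute("select 1")

process = multiprocessing.Process(target=run_in_child)
process.start()
process.join()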

Connections that are checked out are not discarded when the engine is disposed or garbage collected, as these connections are still strongly referenced elsewhere by the application. However, after Engine.dispose() is called, those connections are no longer associated with that Engine; when they are closed, they will be returned to their now-orphaned connection pool which will ultimately be garbage collected, once all connections which refer to it are also no longer referenced anywhere. Since this process is not easy to control, it is strongly recommended that Engine.dispose() is called only after all checked out connections are checked in or otherwise de-associated from their pool.

An alternative for applications that are negatively impacted by the Engine object's use of connection pooling is to disable pooling entirely. This typically incurs only a modest performance impact upon the use of new connections, and means that when a connection is checked in, it is entirely closed out and is not held in memory. See Switching Pool Implementations for guidelines on how to disable pooling.

Using the Threadlocal Execution Strategy

The “threadlocal” engine strategy is an optional feature which can be used by non-ORM applications to associate transactions with the current thread, such that all parts of the application can participate in that transaction implicitly without the need to explicitly reference a Connection.

Deprecated since version 1.3: The “threadlocal” engine strategy is deprecated, and will be removed in a future release.

This strategy is designed for a particular pattern of usage which is generally considered as a legacy pattern. It has no impact on the “thread safety” of SQLAlchemy components or one's application. It also should not be used when using an ORM Session object, as the Session itself represents an ongoing transaction and itself handles the job of maintaining connection and transactional resources.

See also

“threadlocal” engine strategy deprecated

Enabling threadlocal is achieved as follows:

db = create_engine('mysql://localhost/test', strategy='threadlocal')

The above Engine will now acquire a Connection using connection resources derived from a thread-local variable whenever Engine.execute() or Engine.contextual_connect() is called. This connection resource is maintained as long as it is referenced, which allows multiple points of an application to share a transaction while using connectionless execution:

def call_operation1():
    db.execute("insert into users values (?, ?)", 1, "john")

def call_operation2():
    users.update(users.c.user_id==5).execute(name='ed')

db.begin()
try:
    call_operation1()
    call_operation2()
    db.commit()
except:
    db.rollback()

Explicit execution can be mixed with connectionless execution by using the Engine.connect() method to acquire a Connection that is not part of the threadlocal scope:

db.begin()
conn = db.connect()
try:
    conn.execute(log_table.insert(), message="Operation started")
    call_operation1()
    call_operation2()
    db.commit()
    conn.execute(log_table.insert(), message="Operation succeeded")
except:
    db.rollback()
    conn.execute(log_table.insert(), message="Operation failed")
finally:
    conn.close()

To access the Connection that is bound to the threadlocal scope, call Engine.contextual_connect():

conn = db.contextual_connect()
call_operation3(conn)
conn.close()

Calling close() on the “contextual” connection does not release its resources until all other usages of that resource are closed as well, including that any ongoing transactions are rolled back or committed.

Working with Raw DBAPI Connections

There are some cases where SQLAlchemy does not provide a generic way of accessing some DBAPI functions, such as calling stored procedures as well as dealing with multiple result sets. In these cases, it's just as expedient to deal with the raw DBAPI connection directly.

The most common way to access the raw DBAPI connection is to get it from an already present Connection object directly. It is present using the Connection.connection attribute:

connection = engine.connect()
dbapi_conn = connection.connection

The DBAPI connection here is actually “proxied” in terms of the originating connection pool, however this is an implementation detail that in most cases can be ignored. As this DBAPI connection is still contained within the scope of an owning Connection object, it is best to make use of the Connection object for most features such as transaction control as well as calling the Connection.close() method; if these operations are performed on the DBAPI connection directly, the owning Connection will not be aware of these changes in state.
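For example, a rough sketch that keeps transaction control and cleanup on the Connection while using the raw cursor only for a DBAPI-specific call (my_procedure is hypothetical):

connection = engine.connect()
trans = connection.begin()
try:
    # the raw DBAPI connection is used only for the cursor-level call
    cursor = connection.connection.cursor()
    cursor.callproc("my_procedure", ['x', 'y', 'z'])
    cursor.close()

    # transaction control and connection cleanup stay on the Connection
    trans.commit()
finally:
    connection.close()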

To overcome the limitations imposed by the DBAPI connection that is maintained by an owning Connection, a DBAPI connection is also available without the need to procure a Connection first, using the Engine.raw_connection() method of Engine:

dbapi_conn = engine.raw_connection()

This DBAPI connection is again a “proxied” form as was the case before. The purpose of this proxying is now apparent, as when we call the .close() method of this connection, the DBAPI connection is typically not actually closed, but instead released back to the engine's connection pool:

dbapi_conn.close()

While SQLAlchemy may in the future add built-in patterns for more DBAPI use cases, there are diminishing returns as these cases tend to be rarely needed and they also vary highly depending on the type of DBAPI in use, so in any case the direct DBAPI calling pattern is always there for those cases where it is needed.

Some recipes for DBAPI connection use follow.

Calling Stored Procedures

For stored procedures with special syntactical or parameter concerns, DBAPI-level callproc may be used:

connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.callproc("my_procedure", ['x', 'y', 'z'])
    results = list(cursor.fetchall())
    cursor.close()
    connection.commit()
finally:
    connection.close()

Multiple Result Sets

Multiple result set support is available from a raw DBAPI cursor using the nextset method:

connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    cursor.execute("select * from table1; select * from table2")
    results_one = cursor.fetchall()
    cursor.nextset()
    results_two = cursor.fetchall()
    cursor.close()
finally:
    connection.close()

Registering New Dialects

The create_engine() function call locates the given dialect using setuptools entrypoints. These entry points can be established for third party dialects within the setup.py script. For example, to create a new dialect “foodialect://”, the steps are as follows:

  • Create a package called foodialect.

  • The package should have a module containing the dialect class, which is typically a subclass of sqlalchemy.engine.default.DefaultDialect. In this example let's say it's called FooDialect and its module is accessed via foodialect.dialect.

  • The entry point can be established in setup.py as follows:

entry_points="""
[sqlalchemy.dialects]
foodialect = foodialect.dialect:FooDialect
"""

If the dialect is providing support for a particular DBAPI on top of an existing SQLAlchemy-supported database, the name can be given including a database-qualification. For example, if FooDialect were in fact a MySQL dialect, the entry point could be established like this:

entry_points="""
[sqlalchemy.dialects]
mysql.foodialect = foodialect.dialect:FooDialect
"""

The above entrypoint would then be accessed as create_engine("mysql+foodialect://").

Registering Dialects In-Process

SQLAlchemy also allows a dialect to be registered within the current process, bypassing the need for separate installation. Use the register() function as follows:

from sqlalchemy.dialects import registry
registry.register("mysql.foodialect", "myapp.dialect", "MyMySQLDialect")

The above will respond to create_engine("mysql+foodialect://") and load the MyMySQLDialect class from the myapp.dialect module.

Connection / Engine API

  • class sqlalchemy.engine.Connection(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)
  • Bases: sqlalchemy.engine.Connectable

Provides high-level functionality for a wrapped DB-API connection.

Provides execution support for string-based SQL statements as well as ClauseElement, Compiled and DefaultGenerator objects. Provides a begin() method to return Transaction objects.

The Connection object is not thread-safe. While a Connection can be shared among threads using properly synchronized access, it is still possible that the underlying DBAPI connection may not support shared access between threads. Check the DBAPI documentation for details.

The Connection object represents a single DBAPI connection checked out from the connection pool. In this state, the connection pool has no effect upon the connection, including its expiration or timeout state. For the connection pool to properly manage connections, connections should be returned to the connection pool (i.e. connection.close()) whenever the connection is not in use.

  • __init__(engine, connection=None, close_with_result=False, _branch_from=None, _execution_options=None, _dispatch=None, _has_events=None)
  • Construct a new Connection.

The constructor here is not public and is only called by an Engine. See the Engine.connect() and Engine.contextual_connect() methods.

  • begin()
  • Begin a transaction and return a transaction handle.

The returned object is an instance of Transaction. This object represents the “scope” of the transaction, which completes when either the Transaction.rollback() or Transaction.commit() method is called.

Nested calls to begin() on the same Connection will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:

trans = conn.begin()   # outermost transaction
trans2 = conn.begin()  # "nested"
trans2.commit()        # does nothing
trans.commit()         # actually commits

Calls to Transaction.commit() only have an effect when invoked via the outermost Transaction object, though the Transaction.rollback() method of any of the Transaction objects will roll back the transaction.

See also

Connection.begin_nested() - use a SAVEPOINT

Connection.begin_twophase() - use a two phase / XID transaction

Engine.begin() - context manager available from Engine

  • begin_nested()
  • Begin a nested transaction and return a transaction handle.

The returned object is an instance of NestedTransaction.

Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may commit and rollback, however the outermost transaction still controls the overall commit or rollback of the transaction as a whole.
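A brief sketch of SAVEPOINT usage, assuming a table1 construct as in the earlier transaction examples:

connection = engine.connect()
trans = connection.begin()              # outermost transaction
connection.execute(table1.insert(), col1=7, col2='this is some data')

savepoint = connection.begin_nested()   # emits SAVEPOINT
try:
    connection.execute(table1.insert(), col1=8, col2='more data')
    savepoint.commit()                  # RELEASE SAVEPOINT
except:
    savepoint.rollback()                # ROLLBACK TO SAVEPOINT; the outer transaction continues

trans.commit()
connection.close()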

See also

Connection.begin()

Connection.begin_twophase()

  • begin_twophase(xid=None)
  • Begin a two-phase or XA transaction and return a transaction handle.

The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a prepare() method.

    • Parameters
      • xid – the two phase transaction id. If not supplied, a random id will be generated.

See also

Connection.begin()

Connection.begin_twophase()

  • close()
  • Close this Connection.

This results in a release of the underlying database resources, that is, the DBAPI connection referenced internally. The DBAPI connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this Connection. Any transactional state present on the DBAPI connection is also unconditionally released via the DBAPI connection's rollback() method, regardless of any Transaction object that may be outstanding with regards to this Connection.

After close() is called, the Connection is permanently in a closed state, and will allow no further operations.

  • property closed
  • Return True if this connection is closed.

  • connect()

  • Returns a branched version of this Connection.

The Connection.close() method on the returned Connection can be called and this Connection will remain open.

This method provides usage symmetry with Engine.connect(), including for usage with context managers.

  • property connection
  • The underlying DB-API connection managed by this Connection.

See also

Working with Raw DBAPI Connections

  • property default_isolation_level
  • The default isolation level assigned to this Connection.

This is the isolation level setting that the Connection has when first procured via the Engine.connect() method. This level stays in place until the Connection.execution_options.isolation_level is used to change the setting on a per-Connection basis.

Unlike Connection.get_isolation_level(), this attribute is set ahead of time from the first connection procured by the dialect, so a SQL query is not invoked when this accessor is called.

New in version 0.9.9.

See also

Connection.get_isolation_level() - view current level

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level - set per Connection isolation level

  • detach()
  • Detach the underlying DB-API connection from its connection pool.

E.g.:

with engine.connect() as conn:
    conn.detach()
    conn.execute("SET search_path TO schema1, schema2")

    # work with connection

# connection is fully closed (since we used "with:", can
# also call .close())

This Connection instance will remain usable. When closed (or exited from a context manager context as above), the DB-API connection will be literally closed and not returned to its originating pool.

This method can be used to insulate the rest of an application from a modified state on a connection (such as a transaction isolation level or similar).

  • execute(object, *multiparams, **params)
  • Executes a SQL statement construct and returns a ResultProxy.

    • Parameters

      • object –

The statement to be executed. May be one of:

        • a plain string

        • any ClauseElement construct that is also a subclass of Executable, such as a select() construct

        • a FunctionElement, such as that generated by func, will be automatically wrapped in a SELECT statement, which is then executed

        • a DDLElement object

        • a DefaultGenerator object

        • a Compiled object

      • *multiparams/**params –

represent bound parameter values to be used in the execution. Typically, the format is either a collection of one or more dictionaries passed to *multiparams:

conn.execute(
    table.insert(),
    {"id":1, "value":"v1"},
    {"id":2, "value":"v2"}
)

…or individual key/values interpreted by **params:

conn.execute(
    table.insert(), id=1, value="v1"
)

In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:

conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    (1, "v1"), (2, "v2")
)

conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    1, "v1"
)

Note above, the usage of a question mark “?” or other symbol is contingent upon the “paramstyle” accepted by the DBAPI in use, which may be any of “qmark”, “named”, “pyformat”, “format”, “numeric”. See pep-249 for details on paramstyle.

To execute a textual SQL statement which uses bound parameters in a DBAPI-agnostic way, use the text() construct.
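For example, a brief sketch using text() with a named bound parameter, assuming the users table and an open connection as in the earlier examples:

from sqlalchemy import text

stmt = text("SELECT username FROM users WHERE id = :user_id")
result = connection.execute(stmt, user_id=7)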

  • execution_options(**opt)
  • Set non-SQL options for the connection which take effect during execution.

The method returns a copy of this Connection which references the same underlying DBAPI connection, but also defines the given execution options which will take effect for a call to execute(). As the new Connection references the same underlying resource, it's usually a good idea to ensure that the copies will be discarded immediately, which is implicit if used as in:

result = connection.execution_options(stream_results=True).\
    execute(stmt)

Note that any key/value can be passed to Connection.execution_options(), and it will be stored in the _execution_options dictionary of the Connection. It is suitable for usage by end-user schemes to communicate with event listeners, for example.

The keywords that are currently recognized by SQLAlchemy itself include all those listed under Executable.execution_options(), as well as others that are specific to Connection.

    • Parameters

      • autocommit – Available on: Connection, statement. When True, a COMMIT will be invoked after execution when executed in ‘autocommit’ mode, i.e. when an explicit transaction is not begun on the connection. Note that DBAPI connections by default are always in a transaction - SQLAlchemy uses rules applied to different kinds of statements to determine if COMMIT will be invoked in order to provide its “autocommit” feature. Typically, all INSERT/UPDATE/DELETE statements as well as CREATE/DROP statements have autocommit behavior enabled; SELECT constructs do not. Use this option when invoking a SELECT or other specific SQL construct where COMMIT is desired (typically when calling stored procedures and such), and an explicit transaction is not in progress.

      • compiled_cache –

Available on: Connection. A dictionary where Compiled objects will be cached when the Connection compiles a clause expression into a Compiled object. It is the user's responsibility to manage the size of this dictionary, which will have keys corresponding to the dialect, clause element, the column names within the VALUES or SET clause of an INSERT or UPDATE, as well as the “batch” mode for an INSERT or UPDATE statement. The format of this dictionary is not guaranteed to stay the same in future releases.

Note that the ORM makes use of its own “compiled” caches for some operations, including flush operations. The caching used by the ORM internally supersedes a cache dictionary specified here.

      • isolation_level –

Available on: Connection. Set the transaction isolation level for the lifespan of this Connection object (not the underlying DBAPI connection, for which the level is reset to its original setting upon termination of this Connection object).

Valid values include those string values accepted by the create_engine.isolation_level parameter passed to create_engine(). These levels are semi-database specific; see individual dialect documentation for valid levels.

Note that this option necessarily affects the underlying DBAPI connection for the lifespan of the originating Connection, and is not per-execution. This setting is not removed until the underlying DBAPI connection is returned to the connection pool, i.e. the Connection.close() method is called.
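For example, a minimal sketch; the level name must be one accepted by the backend in use:

connection = engine.connect().execution_options(
    isolation_level="READ COMMITTED")
try:
    result = connection.execute("select * from users")
finally:
    # returning the connection to the pool also resets the isolation level
    connection.close()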

Warning

The isolation_level execution option should not be used when a transaction is already established, that is, the Connection.begin() method or similar has been called. A database cannot change the isolation level on a transaction in progress, and different DBAPIs and/or SQLAlchemy dialects may implicitly roll back or commit the transaction, or not affect the connection at all.

Changed in version 0.9.9: A warning is emitted when the isolation_level execution option is used after a transaction has been started with Connection.begin() or similar.

Note

The isolation_level execution option is implicitly reset if the Connection is invalidated, e.g. via the Connection.invalidate() method, or if a disconnection error occurs. The new connection produced after the invalidation will not have the isolation level re-applied to it automatically.

See also

create_engine.isolation_level - set per Engine isolation level

Connection.get_isolation_level() - view current level

SQLite Transaction Isolation

PostgreSQL Transaction Isolation

MySQL Transaction Isolation

SQL Server Transaction Isolation

Setting Transaction Isolation Levels - for the ORM

      • no_parameters – When True, if the final parameter list or dictionary is totally empty, will invoke the statement on the cursor as cursor.execute(statement), not passing the parameter collection at all. Some DBAPIs such as psycopg2 and mysql-python consider percent signs as significant only when parameters are present; this option allows code to generate SQL containing percent signs (and possibly other characters) that is neutral regarding whether it's executed by the DBAPI or piped into a script that's later invoked by command line tools.

      • stream_results – Available on: Connection, statement. Indicate to the dialect that results should be “streamed” and not pre-buffered, if possible. This is a limitation of many DBAPIs. The flag is currently understood only by the psycopg2, mysqldb and pymysql dialects.

      • schema_translate_map –

Available on: Connection, Engine. A dictionary mapping schema names to schema names, that will be applied to the Table.schema element of each Table encountered when SQL or DDL expression elements are compiled into strings; the resulting schema name will be converted based on presence in the map of the original name.

New in version 1.1.

See also

Translation of Schema Names

See also

Engine.execution_options()

Executable.execution_options()

Connection.get_execution_options()

  • get_execution_options()
  • Get the non-SQL options which will take effect during execution.

New in version 1.3.

See also

Connection.execution_options()

  • get_isolation_level()
  • Return the current isolation level assigned to this Connection.

This will typically be the default isolation level as determined by the dialect, unless the Connection.execution_options.isolation_level feature has been used to alter the isolation level on a per-Connection basis.

This attribute will typically perform a live SQL operation in order to procure the current isolation level, so the value returned is the actual level on the underlying DBAPI connection regardless of how this state was set. Compare to the Connection.default_isolation_level accessor which returns the dialect-level setting without performing a SQL query.

New in version 0.9.9.

See also

Connection.default_isolation_level - view default level

create_engine.isolation_level - set per Engine isolation level

Connection.execution_options.isolation_level - set per Connection isolation level

  • in_transaction()
  • Return True if a transaction is in progress.

  • property info

  • Info dictionary associated with the underlying DBAPI connection referred to by this Connection, allowing user-defined data to be associated with the connection.

The data here will follow along with the DBAPI connection including after it is returned to the connection pool and used again in subsequent instances of Connection.
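A short sketch; my_app_state is an arbitrary, application-defined key:

connection = engine.connect()
connection.info["my_app_state"] = "some value"

# the same dictionary remains visible to later Connection instances
# that check out this same DBAPI connection from the pool
connection.close()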

  • invalidate(exception=None)
  • Invalidate the underlying DBAPI connection associated with this Connection.

The underlying DBAPI connection is literally closed (if possible), and is discarded. Its source connection pool will typically lazily create a new connection to replace it.

Upon the next use (where “use” typically means using the Connection.execute() method or similar), this Connection will attempt to procure a new DBAPI connection using the services of the Pool as a source of connectivity (e.g. a “reconnection”).

If a transaction was in progress (e.g. the Connection.begin() method has been called) when the Connection.invalidate() method is called, at the DBAPI level all state associated with this transaction is lost, as the DBAPI connection is closed. The Connection will not allow a reconnection to proceed until the Transaction object is ended, by calling the Transaction.rollback() method; until that point, any attempt at continuing to use the Connection will raise an InvalidRequestError. This is to prevent applications from accidentally continuing ongoing transactional operations despite the fact that the transaction has been lost due to an invalidation.

The Connection.invalidate() method, just like auto-invalidation, will at the connection pool level invoke the PoolEvents.invalidate() event.

See also

More on Invalidation

  • property invalidated
  • Return True if this connection was invalidated.

  • run_callable(callable_, *args, **kwargs)

  • Given a callable object or function, execute it, passing a Connection as the first argument.

The given *args and **kwargs are passed subsequent to the Connection argument.

This function, along with Engine.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.

  • scalar(object, *multiparams, **params)
  • Executes and returns the first column of the first row.

The underlying result/cursor is closed after execution.

  • schema_for_object
  • Return the “.schema” attribute for an object.

Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.

New in version 1.1.

  • transaction(callable_, *args, **kwargs)
  • Execute the given function within a transaction boundary.

The function is passed this Connection as the first argument, followed by the given *args and **kwargs, e.g.:

def do_something(conn, x, y):
    conn.execute("some statement", {'x':x, 'y':y})

conn.transaction(do_something, 5, 10)

The operations inside the function are all invoked within the context of a single Transaction. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.

Note

The transaction() method is superseded by the usage of the Python with: statement, which can be used with Connection.begin():

with conn.begin():
    conn.execute("some statement", {'x':5, 'y':10})

As well as with Engine.begin():

with engine.begin() as conn:
    conn.execute("some statement", {'x':5, 'y':10})

See also

Engine.begin() - engine-level transactional context

Engine.transaction() - engine-level version of Connection.transaction()

  • class sqlalchemy.engine.Connectable
  • Interface for an object which supports execution of SQL constructs.

The two implementations of Connectable are Connection and Engine.

Connectable must also implement the ‘dialect’ member which references a Dialect instance.

  • connect()
  • Return a Connection object.

Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.

  • contextual_connect(*arg, **kw)
  • Return a Connection object which may be part of an ongoing context.

Deprecated since version 1.3: The Engine.contextual_connect() and Connection.contextual_connect() methods are deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from an Engine, use the Engine.connect() method.

Depending on context, this may be self if this object is already an instance of Connection, or a newly procured Connection if this object is an instance of Engine.

  • create(entity, **kwargs)
  • Emit CREATE statements for the given schema entity.

Deprecated since version 0.7: The Connectable.create() method is deprecated and will be removed in a future release. Please use the .create() method on specific schema objects to emit DDL sequences, including Table.create(), Index.create(), and MetaData.create_all().

  • drop(entity, **kwargs)
  • Emit DROP statements for the given schema entity.

Deprecated since version 0.7: The Connectable.drop() method is deprecated and will be removed in a future release. Please use the .drop() method on specific schema objects to emit DDL sequences, including Table.drop(), Index.drop(), and MetaData.drop_all().

May be self if this is already an Engine.

  • execute(object, *multiparams, **params)
  • Executes the given construct and returns a ResultProxy.

  • scalar(object, *multiparams, **params)

  • Executes and returns the first column of the first row.

The underlying cursor is closed after execution.

  • class sqlalchemy.engine.CreateEnginePlugin(url, kwargs)
  • A set of hooks intended to augment the construction of an Engine object based on entrypoint names in a URL.

The purpose of CreateEnginePlugin is to allow third-party systems to apply engine, pool and dialect level event listeners without the need for the target application to be modified; instead, the plugin names can be added to the database URL. Target applications for CreateEnginePlugin include:

  • connection and SQL performance tools, e.g. which use events to track number of checkouts and/or time spent with statements

  • connectivity plugins such as proxies

Plugins are registered using entry points in a similar way as that of dialects:

entry_points={
    'sqlalchemy.plugins': [
        'myplugin = myapp.plugins:MyPlugin'
    ]
}

A plugin that uses the above names would be invoked from a database URL as in:

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test?plugin=myplugin")

Alternatively, the create_engine.plugins argument may be passed as a list to create_engine():

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/test",
    plugins=["myplugin"])

New in version 1.2.3: plugin names can also be specified to create_engine() as a list

The plugin argument supports multiple instances, so that a URL may specify multiple plugins; they are loaded in the order stated in the URL:

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=plugin_one&plugin=plugin_two&plugin=plugin_three")

A plugin can receive additional arguments from the URL string as well as from the keyword arguments passed to create_engine(). The URL object and the keyword dictionary are passed to the constructor so that these arguments can be extracted from the url's URL.query collection as well as from the dictionary:

class MyPlugin(CreateEnginePlugin):
    def __init__(self, url, kwargs):
        self.my_argument_one = url.query.pop('my_argument_one')
        self.my_argument_two = url.query.pop('my_argument_two')
        self.my_argument_three = kwargs.pop('my_argument_three', None)

Arguments like those illustrated above would be consumed from the following:

from sqlalchemy import create_engine

engine = create_engine(
    "mysql+pymysql://scott:tiger@localhost/"
    "test?plugin=myplugin&my_argument_one=foo&my_argument_two=bar",
    my_argument_three='bat')

The URL and dictionary are used for subsequent setup of the engine as they are, so the plugin can modify their arguments in-place. Arguments that are only understood by the plugin should be popped or otherwise removed so that they aren't interpreted as erroneous arguments afterwards.

When the engine creation process completes and produces the Engine object, it is again passed to the plugin via the CreateEnginePlugin.engine_created() hook. In this hook, additional changes can be made to the engine, most typically involving setup of events (e.g. those defined in Core Events).

New in version 1.1.

The plugin object is instantiated individually for each call to create_engine(). A single Engine will be passed to the CreateEnginePlugin.engine_created() method corresponding to this URL.

    • Parameters

      • url – the URL object. The plugin should inspect what it needs here as well as remove its custom arguments from the URL.query collection. The URL can be modified in-place in any other way as well.

      • kwargs – The keyword arguments passed to create_engine(). The plugin can read and modify this dictionary in-place, to affect the ultimate arguments used to create the engine. It should remove its custom arguments from the dictionary as well.

  • engine_created(engine)
  • Receive the Engine object when it is fully constructed.

The plugin may make additional changes to the engine, such as registering engine or connection pool events.
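A rough sketch of a plugin that registers a connection pool event from this hook; the listener shown is purely illustrative:

from sqlalchemy import event
from sqlalchemy.engine import CreateEnginePlugin

class LoggingPlugin(CreateEnginePlugin):
    def engine_created(self, engine):
        # attach a pool-level "connect" listener to the finished Engine
        @event.listens_for(engine, "connect")
        def on_connect(dbapi_connection, connection_record):
            print("new DBAPI connection:", dbapi_connection)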

  • handle_dialect_kwargs(dialect_cls, dialect_args)
  • parse and modify dialect kwargs

  • handle_pool_kwargs(pool_cls, pool_args)

  • parse and modify pool kwargs

  • class sqlalchemy.engine.Engine
  • Bases: sqlalchemy.engine.Connectable

Connects a Pool and Dialect together to provide a source of database connectivity and behavior.

An Engine object is instantiated publicly using the create_engine() function.

See also

Engine Configuration

Working with Engines and Connections

  • begin(close_with_result=False)
  • Return a context manager delivering a Connection with a Transaction established.

E.g.:

with engine.begin() as conn:
    conn.execute("insert into table (x, y, z) values (1, 2, 3)")
    conn.execute("my_special_procedure(5)")

Upon successful operation, the Transaction is committed. If an error is raised, the Transaction is rolled back.

The close_with_result flag is normally False, and indicates that the Connection will be closed when the operation is complete. When set to True, it indicates the Connection is in “single use” mode, where the ResultProxy returned by the first call to Connection.execute() will close the Connection when that ResultProxy has exhausted all result rows.

See also

Engine.connect() - procure a Connection from an Engine.

Connection.begin() - start a Transaction for a particular Connection.

  • connect(**kwargs)
  • Return a new Connection object.

The Connection object is a facade that uses a DBAPI connection internally in order to communicate with the database. This connection is procured from the connection-holding Pool referenced by this Engine. When the close() method of the Connection object is called, the underlying DBAPI connection is then returned to the connection pool, where it may be used again in a subsequent call to connect().

  • contextual_connect(close_with_result=False, **kwargs)
  • Return a Connection object which may be part of some ongoing context.

Deprecated since version 1.3: The Engine.contextual_connect() method is deprecated. This method is an artifact of the threadlocal engine strategy which is also to be deprecated. For explicit connections from an Engine, use the Engine.connect() method.

By default, this method does the same thing as Engine.connect(). Subclasses of Engine may override this method to provide contextual behavior.

    • Parameters
      • close_with_result – When True, the first ResultProxy created by the Connection will call the Connection.close() method of that connection as soon as any pending result rows are exhausted. This is used to supply the “connectionless execution” behavior provided by the Engine.execute() method.

  • dispose()
  • Dispose of the connection pool used by this Engine.

This has the effect of fully closing all currently checked in database connections. Connections that are still checked out will not be closed, however they will no longer be associated with this Engine, so when they are closed individually, eventually the Pool which they are associated with will be garbage collected and they will be closed out fully, if not already closed on checkin.

A new connection pool is created immediately after the old one has been disposed. This new pool, like all SQLAlchemy connection pools, does not make any actual connections to the database until one is first requested, so as long as the Engine isn't used again, no new connections will be made.

See also

Engine Disposal

  • property driver
  • Driver name of the Dialect in use by this Engine.

  • execute(statement, *multiparams, **params)

  • Executes the given construct and returns a ResultProxy.

The arguments are the same as those used by Connection.execute().

Here, a Connection is acquired using the contextual_connect() method, and the statement executed with that connection. The returned ResultProxy is flagged such that when the ResultProxy is exhausted and its underlying cursor is closed, the Connection created here will also be closed, which allows its associated DBAPI connection resource to be returned to the connection pool.

  • execution_options(**opt)
  • Return a new Engine that will provide Connection objects with the given execution options.

The returned Engine remains related to the original Engine in that it shares the same connection pool and other state:

  • The Pool used by the new Engine is the same instance. The Engine.dispose() method will replace the connection pool instance for the parent engine as well as this one.

  • Event listeners are “cascaded” - meaning, the new Engine inherits the events of the parent, and new events can be associated with the new Engine individually.

  • The logging configuration and logging_name is copied from the parent Engine.

The intent of the Engine.execution_options() method is to implement “sharding” schemes where multiple Engine objects refer to the same connection pool, but are differentiated by options that would be consumed by a custom event:

primary_engine = create_engine("mysql://")
shard1 = primary_engine.execution_options(shard_id="shard1")
shard2 = primary_engine.execution_options(shard_id="shard2")

Above, the shard1 engine serves as a factory forConnection objects that will contain the execution optionshard_id=shard1, and shard2 will produce Connectionobjects that contain the execution option shard_id=shard2.

An event handler can consume the above execution option to perform a schema switch or other operation, given a connection. Below we emit a MySQL use statement to switch databases, at the same time keeping track of which database we’ve established using the Connection.info dictionary, which gives us a persistent storage space that follows the DBAPI connection:

    from sqlalchemy import event
    from sqlalchemy.engine import Engine

    # map each shard_id execution option to a database name
    shards = {"default": "base", "shard1": "db1", "shard2": "db2"}

    @event.listens_for(Engine, "before_cursor_execute")
    def _switch_shard(conn, cursor, stmt,
            params, context, executemany):
        shard_id = conn._execution_options.get('shard_id', "default")
        current_shard = conn.info.get("current_shard", None)

        if current_shard != shard_id:
            cursor.execute("use %s" % shards[shard_id])
            conn.info["current_shard"] = shard_id

See also

Connection.execution_options() - update execution options on a Connection object.

Engine.update_execution_options() - update the execution options for a given Engine in place.

Engine.get_execution_options()

  • get_execution_options()
  • Get the non-SQL options which will take effect during execution.

See also

Engine.execution_options()

  • has_table(table_name, schema=None)
  • Return True if the given backend has a table of the given name.
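A minimal sketch, assuming the users table from the earlier examples and the engine created above:

    # returns True if the backend reports a table named "users"
    if engine.has_table("users"):
        print("users table is present")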

See also

Fine Grained Reflection with Inspector - detailed schema inspection using the Inspector interface.

quoted_name - used to pass quoting information along with a schema identifier.

  • property name
  • String name of the Dialect in use by this Engine.

  • raw_connection(_connection=None)

  • Return a “raw” DBAPI connection from the connection pool.

The returned object is a proxied version of the DBAPI connection object used by the underlying driver in use. The object will have all the same behavior as the real DBAPI connection, except that its close() method will result in the connection being returned to the pool, rather than being closed for real.

This method provides direct DBAPI connection access for special situations when the API provided by Connection is not needed. When a Connection object is already present, the DBAPI connection is available using the Connection.connection accessor.
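As a minimal sketch, assuming the engine and users table from the earlier examples, a raw DBAPI connection can be checked out, used through its cursor, and then returned to the pool via close():

    dbapi_conn = engine.raw_connection()
    try:
        cursor = dbapi_conn.cursor()
        cursor.execute("select username from users")
        print(cursor.fetchall())
        cursor.close()
        dbapi_conn.commit()
    finally:
        # returns the connection to the pool rather than closing it outright
        dbapi_conn.close()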

See also

Working with Raw DBAPI Connections

  • run_callable(callable_, *args, **kwargs)
  • Given a callable object or function, execute it, passing a Connection as the first argument.

The given *args and **kwargs are passed subsequent to the Connection argument.

This function, along with Connection.run_callable(), allows a function to be run with a Connection or Engine object without the need to know which one is being dealt with.

  • scalar(statement, *multiparams, **params)
  • Executes and returns the first column of the first row.

The underlying cursor is closed after execution.
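A minimal sketch, assuming the engine and users table from the earlier examples:

    # fetch a single value without keeping a cursor or connection open
    user_count = engine.scalar("select count(*) from users")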

  • schema_for_object
  • Return the “.schema” attribute for an object.

Used for Table, Sequence and similar objects, and takes into account the Connection.execution_options.schema_translate_map parameter.

New in version 1.1.

  • table_names(schema=None, connection=None)
  • Return a list of all table names available in the database.

    • Parameters
      • schema – Optional, retrieve names from a non-default schema.

      • connection – Optional, use a specified connection. Default is the contextual_connect for this Engine.
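A minimal sketch, assuming the engine created earlier:

    # list the table names visible in the default schema
    print(engine.table_names())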

  • transaction(callable_, *args, **kwargs)

  • Execute the given function within a transaction boundary.

The function is passed a Connection newly procured from Engine.contextual_connect() as the first argument, followed by the given *args and **kwargs.

e.g.:

    def do_something(conn, x, y):
        conn.execute("some statement", {'x':x, 'y':y})

    engine.transaction(do_something, 5, 10)

The operations inside the function are all invoked within the context of a single Transaction. Upon success, the transaction is committed. If an exception is raised, the transaction is rolled back before propagating the exception.

Note

The transaction() method is superseded by the usage of the Python with: statement, which can be used with Engine.begin():

    with engine.begin() as conn:
        conn.execute("some statement", {'x':5, 'y':10})

See also

Engine.begin() - engine-level transactional context

Connection.transaction() - connection-level version of Engine.transaction()

  • update_execution_options(**opt)
  • Update the default execution_options dictionary of this Engine.

The given keys/values in **opt are added to the default execution options that will be used for all connections. The initial contents of this dictionary can be sent via the execution_options parameter to create_engine().
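A minimal sketch, reusing the shard_id option from the Engine.execution_options() example above and its primary_engine:

    # merge a default execution option into this Engine in place;
    # all subsequently-acquired connections will carry shard_id="shard2"
    primary_engine.update_execution_options(shard_id="shard2")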

See also

Connection.execution_options()

Engine.execution_options()

  • class sqlalchemy.engine.ExceptionContext
  • Encapsulate information about an error condition in progress.

This object exists solely to be passed to the ConnectionEvents.handle_error() event, supporting an interface that can be extended without backwards-incompatibility.

New in version 0.9.7.
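As a brief sketch of how this object is received, assuming the engine created earlier, a handle_error listener is passed the ExceptionContext as its sole argument:

    from sqlalchemy import event

    @event.listens_for(engine, "handle_error")
    def receive_handle_error(context):
        # "context" is an ExceptionContext; log the failing statement
        print("error on statement:", context.statement)
        print("original exception:", context.original_exception)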

  • chained_exception = None
  • The exception that was returned by the previous handler in the exception chain, if any.

If present, this exception will be the one ultimately raised by SQLAlchemy unless a subsequent handler replaces it.

May be None.

  • connection = None
  • The Connection in use during the exception.

This member is present, except in the case of a failure when first connecting.

See also

ExceptionContext.engine

  • cursor = None
  • The DBAPI cursor object.

May be None.

  • engine = None
  • The Engine in use during the exception.

This member should always be present, even in the case of a failure when first connecting.

New in version 1.0.0.

  • execution_context = None
  • The ExecutionContext corresponding to the execution operation in progress.

This is present for statement execution operations, but not for operations such as transaction begin/end. It also is not present when the exception was raised before the ExecutionContext could be constructed.

Note that the ExceptionContext.statement and ExceptionContext.parameters members may represent a different value than that of the ExecutionContext, potentially in the case where a ConnectionEvents.before_cursor_execute() event or similar modified the statement/parameters to be sent.

May be None.

  • invalidate_pool_on_disconnect = True
  • Represent whether all connections in the pool should be invalidated when a “disconnect” condition is in effect.

Setting this flag to False within the scope of the ConnectionEvents.handle_error() event will have the effect such that the full collection of connections in the pool will not be invalidated during a disconnect; only the current connection that is the subject of the error will actually be invalidated.

The purpose of this flag is for custom disconnect-handling schemes where the invalidation of other connections in the pool is to be performed based on other conditions, or even on a per-connection basis.

New in version 1.0.3.
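A minimal sketch of such a scheme, assuming the engine created earlier:

    from sqlalchemy import event

    @event.listens_for(engine, "handle_error")
    def limit_invalidation(context):
        # on a disconnect, invalidate only the failed connection,
        # leaving the rest of the pool intact
        if context.is_disconnect:
            context.invalidate_pool_on_disconnect = False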

  • is_disconnect = None
  • Represent whether the exception that occurred represents a “disconnect” condition.

This flag will always be True or False within the scope of the ConnectionEvents.handle_error() handler.

SQLAlchemy will defer to this flag in order to determine whether or not the connection should be invalidated subsequently. That is, by assigning to this flag, a “disconnect” event which then results in a connection and pool invalidation can be invoked or prevented.

  • original_exception = None
  • The exception object which was caught.

This member is always present.

  • parameters = None
  • Parameter collection that was emitted directly to the DBAPI.

May be None.

  • sqlalchemy_exception = None
  • The sqlalchemy.exc.StatementError which wraps the original, and will be raised if exception handling is not circumvented by the event.

May be None, as not all exception types are wrapped by SQLAlchemy. For DBAPI-level exceptions that subclass the dbapi’s Error class, this field will always be present.

  • statement = None
  • String SQL statement that was emitted directly to the DBAPI.

May be None.

  • class sqlalchemy.engine.NestedTransaction(connection, parent)
  • Represent a ‘nested’, or SAVEPOINT transaction.

A new NestedTransaction object may be procured using the Connection.begin_nested() method.

The interface is the same as that of Transaction.
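A minimal sketch, assuming a table x with columns a and b as in the Transaction example further below:

    with engine.connect() as connection:
        trans = connection.begin()
        connection.execute("insert into x (a, b) values (1, 2)")
        nested = connection.begin_nested()   # emits SAVEPOINT
        connection.execute("insert into x (a, b) values (3, 4)")
        nested.rollback()                    # rolls back to the SAVEPOINT only
        trans.commit()                       # the first INSERT is committed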

  • class sqlalchemy.engine.ResultProxy(context)
  • Wraps a DB-API cursor object to provide easier access to row columns.

Individual columns may be accessed by their integer position, case-insensitive column name, or by schema.Column object. e.g.:

    row = result.fetchone()

    col1 = row[0]                    # access via integer position

    col2 = row['col2']               # access via name

    col3 = row[mytable.c.mycol]      # access via Column object

ResultProxy also handles post-processing of result column data using TypeEngine objects, which are referenced from the originating SQL statement that produced this result set.

  • _cursor_description()
  • May be overridden by subclasses.

  • _process_row

  • alias of RowProxy

  • _soft_close()

  • Soft close this ResultProxy.

This releases all DBAPI cursor resources, but leaves the ResultProxy “open” from a semantic perspective, meaning the fetchXXX() methods will continue to return empty results.

This method is called automatically when:

  • all result rows are exhausted using the fetchXXX() methods.

  • cursor.description is None.

This method is not public, but is documented in order to clarify the “autoclose” process used.

New in version 1.0.0.

See also

ResultProxy.close()

  • close()
  • Close this ResultProxy.

This closes out the underlying DBAPI cursor corresponding to the statement execution, if one is still present. Note that the DBAPI cursor is automatically released when the ResultProxy exhausts all available rows. ResultProxy.close() is generally an optional method except in the case when discarding a ResultProxy that still has additional rows pending for fetch.

In the case of a result that is the product of connectionless execution, the underlying Connection object is also closed, which releases DBAPI connection resources.

After this method is called, it is no longer valid to call upon the fetch methods, which will raise a ResourceClosedError on subsequent use.

Changed in version 1.0.0: - the ResultProxy.close() method has been separated out from the process that releases the underlying DBAPI cursor resource. The “auto close” feature of the Connection now performs a so-called “soft close”, which releases the underlying DBAPI cursor, but allows the ResultProxy to still behave as an open-but-exhausted result set; the actual ResultProxy.close() method is never called. It is still safe to discard a ResultProxy that has been fully exhausted without calling this method.

See also

Working with Engines and Connections

ResultProxy._soft_close()

  • fetchall()
  • Fetch all rows, just like DB-API cursor.fetchall().

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Subsequent calls to ResultProxy.fetchall() will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: - Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.

  • fetchmany(size=None)
  • Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize).

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Calls to ResultProxy.fetchmany() after all rows have been exhausted will return an empty list. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: - Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.

  • fetchone()
  • Fetch one row, just like DB-API cursor.fetchone().

After all rows have been exhausted, the underlying DBAPI cursor resource is released, and the object may be safely discarded.

Calls to ResultProxy.fetchone() after all rows have been exhausted will return None. After the ResultProxy.close() method is called, the method will raise ResourceClosedError.

Changed in version 1.0.0: - Added “soft close” behavior which allows the result to be used in an “exhausted” state prior to calling the ResultProxy.close() method.

  • first()
  • Fetch the first row and then close the result set unconditionally.

Returns None if no row is present.

After calling this method, the object is fully closed, e.g. the ResultProxy.close() method will have been called.

  • inserted_primary_key
  • Return the primary key for the row just inserted.

The return value is a list of scalar values corresponding to the list of primary key columns in the target table.

This only applies to single row insert() constructs which did not explicitly specify Insert.returning().

Note that primary key columns which specify a server_default clause, or otherwise do not qualify as “autoincrement” columns (see the notes at Column), and were generated using the database-side default, will appear in this list as None unless the backend supports “returning” and the insert statement executed with the “implicit returning” enabled.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() construct.
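A minimal sketch, assuming a hypothetical users Table object with an autoincrementing id primary key and the username column used earlier:

    result = connection.execute(users.insert().values(username="sandy"))
    new_id = result.inserted_primary_key[0]   # scalar value for the "id" column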

  • is_insert
  • True if this ResultProxy is the result of executing an expression language compiled insert() construct.

When True, this implies that the inserted_primary_key attribute is accessible, assuming the statement did not include a user defined “returning” construct.

  • keys()
  • Return the current set of string keys for rows.

  • last_inserted_params()

  • Return the collection of inserted parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() construct.

  • last_updated_params()
  • Return the collection of updated parameters from this execution.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an update() construct.

  • lastrow_has_defaults()
  • Return lastrow_has_defaults() from the underlying ExecutionContext.

See ExecutionContext for details.

  • property lastrowid
  • Return the ‘lastrowid’ accessor on the DBAPI cursor.

This is a DBAPI specific method and is only functional for those backends which support it, for statements where it is appropriate. Its behavior is not consistent across backends.

Usage of this method is normally unnecessary when using insert() expression constructs; the inserted_primary_key attribute provides a tuple of primary key values for a newly inserted row, regardless of database backend.

  • next()
  • Implement the next() protocol.

New in version 1.2.

  • postfetch_cols()
  • Return postfetch_cols() from the underlying ExecutionContext.

See ExecutionContext for details.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() or update() construct.

  • prefetch_cols()
  • Return prefetch_cols() from the underlying ExecutionContext.

See ExecutionContext for details.

Raises InvalidRequestError if the executed statement is not a compiled expression construct or is not an insert() or update() construct.

  • returned_defaults
  • The value is an instance of RowProxy, or None if ValuesBase.return_defaults() was not used or if the backend does not support RETURNING.

New in version 0.9.0.

See also

ValuesBase.return_defaults()

  • property returns_rows
  • True if this ResultProxy returns rows.

I.e. if it is legal to call the methods fetchone(), fetchmany(), fetchall().

  • rowcount
  • Return the ‘rowcount’ for this result.

The ‘rowcount’ reports the number of rows matched by the WHERE criterion of an UPDATE or DELETE statement.

Note

Notes regarding ResultProxy.rowcount:

  • This attribute returns the number of rows matched, which is not necessarily the same as the number of rows that were actually modified - an UPDATE statement, for example, may have no net change on a given row if the SET values given are the same as those present in the row already. Such a row would be matched but not modified. On backends that feature both styles, such as MySQL, rowcount is configured by default to return the match count in all cases.

  • ResultProxy.rowcount is only useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does not return the number of rows available from the results of a SELECT statement as DBAPIs cannot support this functionality when rows are unbuffered.

  • ResultProxy.rowcount may not be fully implemented by all dialects. In particular, most DBAPIs do not support an aggregate rowcount result from an executemany call. The ResultProxy.supports_sane_rowcount() and ResultProxy.supports_sane_multi_rowcount() methods will report from the dialect if each usage is known to be supported.

  • Statements that use RETURNING may not return a correct rowcount.
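A minimal sketch, using only the username column of the users table shown earlier:

    result = connection.execute(
        "update users set username='sandy2' where username='sandy'")
    print("rows matched:", result.rowcount)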

  • scalar()
  • Fetch the first column of the first row, and close the result set.

Returns None if no row is present.

After calling this method, the object is fully closed, e.g. the ResultProxy.close() method will have been called.

  • supports_sane_multi_rowcount()
  • Return supports_sane_multi_rowcount from the dialect.

See ResultProxy.rowcount for background.

  • supports_sane_rowcount()
  • Return supports_sane_rowcount from the dialect.

See ResultProxy.rowcount for background.

  • class sqlalchemy.engine.RowProxy(parent, row, processors, keymap)
  • Bases: sqlalchemy.engine.BaseRowProxy

Proxy values from a single cursor row.

Mostly follows “ordered dictionary” behavior, mapping result values to the string-based column name, the integer position of the result in the row, as well as Column instances which can be mapped to the original Columns that produced this result set (for results that correspond to constructed SQL expressions).

  • has_key(key)
  • Return True if this RowProxy contains the given key.

  • items()

  • Return a list of tuples, each tuple containing a key/value pair.

  • keys()

  • Return the list of keys as strings represented by this RowProxy.
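
A minimal sketch, continuing from the users table queried earlier and assuming at least one row is present:

    row = connection.execute("select username from users").fetchone()
    print(row.keys())                 # e.g. ['username']
    print(row.has_key('username'))    # True
    print(row.items())                # list of (key, value) tuples
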
  • class sqlalchemy.engine.Transaction(connection, parent)
  • Represent a database transaction in progress.

The Transaction object is procured by calling the begin() method of Connection:

    from sqlalchemy import create_engine
    engine = create_engine("postgresql://scott:tiger@localhost/test")
    connection = engine.connect()
    trans = connection.begin()
    connection.execute("insert into x (a, b) values (1, 2)")
    trans.commit()

The object provides rollback() and commit() methods in order to control transaction boundaries. It also implements a context manager interface so that the Python with statement can be used with the Connection.begin() method:

    with connection.begin():
        connection.execute("insert into x (a, b) values (1, 2)")

The Transaction object is not threadsafe.

See also

Connection.begin()

Connection.begin_twophase()

Connection.begin_nested()

  • close()
  • Close this Transaction.

If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns.

This is used to cancel a Transaction without affecting the scope of an enclosing transaction.

  • class sqlalchemy.engine.TwoPhaseTransaction(connection, xid)
  • Represent a two-phase transaction.

A new TwoPhaseTransaction object may be procured using the Connection.begin_twophase() method.

The interface is the same as that of Transaction with the addition of the prepare() method.

  • prepare()
  • Prepare this TwoPhaseTransaction.

After a PREPARE, the transaction can be committed.
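A minimal sketch, assuming a backend that supports two-phase commit (such as PostgreSQL) and the table x used in the Transaction example above:

    connection = engine.connect()
    xact = connection.begin_twophase()
    connection.execute("insert into x (a, b) values (1, 2)")
    xact.prepare()    # emits the database's PREPARE phase
    xact.commit()     # commits the prepared transaction
    connection.close()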