PostgreSQL

Support for the PostgreSQL database.

DBAPI Support

The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

Sequences/SERIAL/IDENTITY

PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.
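As a brief illustration (table and metadata names here are arbitrary), a table such as:

    t = Table('t', metadata, Column('id', Integer, primary_key=True))

will generate DDL for the id column along the lines of:

    CREATE TABLE t (
        id SERIAL NOT NULL,
        PRIMARY KEY (id)
    )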

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

    Table('sometable', metadata,
        Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
    )

When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the “last insert identifier” available, a RETURNING clause is added to the INSERT statement which specifies the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the “last inserted identifier” functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.
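As a minimal sketch of the single-row case (assuming a sometable Table and an engine bound to a PostgreSQL database), the generated value is available from the result:

    with engine.connect() as conn:
        result = conn.execute(sometable.insert().values(data='some data'))
        # on PostgreSQL 8.2+ this value was obtained via RETURNING;
        # otherwise the sequence was pre-executed
        print(result.inserted_primary_key)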

To turn off the usage of RETURNING by default, specify the flag implicit_returning=False to create_engine().

PostgreSQL 10 IDENTITY columns

PostgreSQL 10 has a new IDENTITY feature that supersedes the use of SERIAL. Built-in support for rendering of IDENTITY is not available yet, however the following compilation hook may be used to replace occurrences of SERIAL with IDENTITY:

    from sqlalchemy.schema import CreateColumn
    from sqlalchemy.ext.compiler import compiles


    @compiles(CreateColumn, 'postgresql')
    def use_identity(element, compiler, **kw):
        text = compiler.visit_create_column(element, **kw)
        text = text.replace("SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY")
        return text

Using the above, a table such as:

    t = Table(
        't', m,
        Column('id', Integer, primary_key=True),
        Column('data', String)
    )

Will generate on the backing database as:

    CREATE TABLE t (
        id INT GENERATED BY DEFAULT AS IDENTITY NOT NULL,
        data VARCHAR,
        PRIMARY KEY (id)
    )

Transaction Isolation Level

All PostgreSQL dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level accepted by create_engine(), as well as the Connection.execution_options.isolation_level argument as passed to Connection.execution_options(). When using a non-psycopg2 dialect, this feature works by issuing the command SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> for each new connection. For the special AUTOCOMMIT isolation level, DBAPI-specific techniques are used.

To set isolation level using create_engine():

    engine = create_engine(
        "postgresql+pg8000://scott:tiger@localhost/test",
        isolation_level="READ UNCOMMITTED"
    )

To set using per-connection execution options:

    connection = engine.connect()
    connection = connection.execution_options(
        isolation_level="READ COMMITTED"
    )

Valid values for isolation_level include:

  • READ COMMITTED

  • READ UNCOMMITTED

  • REPEATABLE READ

  • SERIALIZABLE

  • AUTOCOMMIT - on psycopg2 / pg8000 only

See also

Psycopg2 Transaction Isolation Level

pg8000 Transaction Isolation Level

Remote-Schema Table Introspection and PostgreSQL search_path

TL;DR;: keep the search_path variable set to its default of public, name schemas other than public explicitly within Table definitions.

The PostgreSQL dialect can reflect tables from any schema. The Table.schema argument, or alternatively the MetaData.reflect.schema argument determines which schema will be searched for the table or tables. The reflected Table objects will in all cases retain this .schema attribute as was specified. However, with regards to tables which these Table objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current PostgreSQL search path.

By default, the PostgreSQL dialect mimics the behavior encouraged by PostgreSQL’s own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the PostgreSQL schema search path. The interaction below illustrates this behavior:

    test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
    CREATE TABLE
    test=> CREATE TABLE referring(
    test(>         id INTEGER PRIMARY KEY,
    test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
    CREATE TABLE
    test=> SET search_path TO public, test_schema;
    test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
    test->  pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
    test->  ON n.oid = c.relnamespace
    test->  JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
    test->  WHERE c.relname='referring' AND r.contype = 'f'
    test->  ;
                   pg_get_constraintdef
    ---------------------------------------------------
     FOREIGN KEY (referred_id) REFERENCES referred(id)
    (1 row)

Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

On the other hand, if we set the search path back to the typical default of public:

    test=> SET search_path TO public;
    SET

The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

    test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
    test->  pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
    test->  ON n.oid = c.relnamespace
    test->  JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
    test->  WHERE c.relname='referring' AND r.contype = 'f';
                         pg_get_constraintdef
    ---------------------------------------------------------------
     FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
    (1 row)

SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:

    >>> from sqlalchemy import Table, MetaData, create_engine
    >>> engine = create_engine("postgresql://scott:tiger@localhost/test")
    >>> with engine.connect() as conn:
    ...     conn.execute("SET search_path TO test_schema, public")
    ...     meta = MetaData()
    ...     referring = Table('referring', meta,
    ...                       autoload=True, autoload_with=conn)
    ...
    <sqlalchemy.engine.result.ResultProxy object at 0x101612ed0>

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

    >>> meta.tables['referred'].schema is None
    True

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

    >>> with engine.connect() as conn:
    ...     conn.execute("SET search_path TO test_schema, public")
    ...     meta = MetaData()
    ...     referring = Table('referring', meta, autoload=True,
    ...                       autoload_with=conn,
    ...                       postgresql_ignore_search_path=True)
    ...
    <sqlalchemy.engine.result.ResultProxy object at 0x1016126d0>

We will now have test_schema.referred stored as schema-qualified:

    >>> meta.tables['test_schema.referred'].schema
    'test_schema'

Best Practices for PostgreSQL Schema Reflection

The description of PostgreSQL schema reflection behavior is complex, and is the product of many years of dealing with widely varied use cases and user preferences. But in fact, there’s no need to understand any of it if you just stick to the simplest use pattern: leave the search_path set to its default of public only, never refer to the name public as an explicit schema name otherwise, and refer to all other schema names explicitly when building up a Table object. The options described here are only for those users who can’t, or prefer not to, stay within these guidelines.

Note that in all cases, the “default” schema is always reflected as None. The “default” schema on PostgreSQL is that which is returned by the PostgreSQL current_schema() function. On a typical PostgreSQL installation, this is the name public. So a table that refers to another which is in the public (i.e. default) schema will always have the .schema attribute set to None.

New in version 0.9.2: Added the postgresql_ignore_search_path dialect-level option accepted by Table and MetaData.reflect().

See also

The Schema Search Path - on the PostgreSQL website.

INSERT/UPDATE…RETURNING

The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the _UpdateBase.returning() method on a per-statement basis:

    # INSERT..RETURNING
    result = table.insert().returning(table.c.col1, table.c.col2).\
        values(name='foo')
    print(result.fetchall())

    # UPDATE..RETURNING
    result = table.update().returning(table.c.col1, table.c.col2).\
        where(table.c.name=='foo').values(name='bar')
    print(result.fetchall())

    # DELETE..RETURNING
    result = table.delete().returning(table.c.col1, table.c.col2).\
        where(table.c.name=='foo')
    print(result.fetchall())

INSERT…ON CONFLICT (Upsert)

Starting with version 9.5, PostgreSQL allows “upserts” (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement. A candidate row will only be inserted if that row does not violate any unique constraints. In the case of a unique constraint violation, a secondary action can occur which can be either “DO UPDATE”, indicating that the data in the target row should be updated, or “DO NOTHING”, which indicates to silently skip this row.

Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific postgresql.dml.insert() function, which provides the generative methods on_conflict_do_update() and on_conflict_do_nothing():

    from sqlalchemy.dialects.postgresql import insert

    insert_stmt = insert(my_table).values(
        id='some_existing_id',
        data='inserted value')

    do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
        index_elements=['id']
    )

    conn.execute(do_nothing_stmt)

    do_update_stmt = insert_stmt.on_conflict_do_update(
        constraint='pk_my_table',
        set_=dict(data='updated value')
    )

    conn.execute(do_update_stmt)

Both methods supply the “target” of the conflict using either the named constraint or by column inference:

  • The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

        do_update_stmt = insert_stmt.on_conflict_do_update(
            index_elements=['id'],
            set_=dict(data='updated value')
        )

        do_update_stmt = insert_stmt.on_conflict_do_update(
            index_elements=[my_table.c.id],
            set_=dict(data='updated value')
        )

  • When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

        from sqlalchemy.dialects.postgresql import insert

        stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
        stmt = stmt.on_conflict_do_update(
            index_elements=[my_table.c.user_email],
            index_where=my_table.c.user_email.like('%@gmail.com'),
            set_=dict(data=stmt.excluded.data)
        )
        conn.execute(stmt)

  • The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

        do_update_stmt = insert_stmt.on_conflict_do_update(
            constraint='my_table_idx_1',
            set_=dict(data='updated value')
        )

        do_update_stmt = insert_stmt.on_conflict_do_update(
            constraint='my_table_pk',
            set_=dict(data='updated value')
        )

  • The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly; otherwise, if the constraint is unnamed, inference will be used. This use is especially convenient to refer to the named or unnamed primary key of a Table using the Table.primary_key attribute:

        do_update_stmt = insert_stmt.on_conflict_do_update(
            constraint=my_table.primary_key,
            set_=dict(data='updated value')
        )

ON CONFLICT…DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(id='some_id', data='inserted value')
    do_update_stmt = stmt.on_conflict_do_update(
        index_elements=['id'],
        set_=dict(data='updated value')
    )
    conn.execute(do_update_stmt)

Warning

The Insert.on_conflict_do_update() method does not take into account Python-side default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

In order to refer to the proposed insertion row, the special alias excluded is available as an attribute on the postgresql.dml.Insert object; this object is a ColumnCollection which contains all columns of the target table:

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(
        id='some_id',
        data='inserted value',
        author='jlh')
    do_update_stmt = stmt.on_conflict_do_update(
        index_elements=['id'],
        set_=dict(data='updated value', author=stmt.excluded.author)
    )
    conn.execute(do_update_stmt)

The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(
        id='some_id',
        data='inserted value',
        author='jlh')
    on_update_stmt = stmt.on_conflict_do_update(
        index_elements=['id'],
        set_=dict(data='updated value', author=stmt.excluded.author),
        where=(my_table.c.status == 2)
    )
    conn.execute(on_update_stmt)

ON CONFLICT may also be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the on_conflict_do_nothing() method:

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(id='some_id', data='inserted value')
    stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
    conn.execute(stmt)

If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:

    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(id='some_id', data='inserted value')
    stmt = stmt.on_conflict_do_nothing()
    conn.execute(stmt)

New in version 1.1: Added support for PostgreSQL ON CONFLICT clauses

See also

INSERT .. ON CONFLICT - in the PostgreSQL documentation.

Full Text Search

SQLAlchemy makes available the PostgreSQL @@ operator via the ColumnElement.match() method on any textual column expression. On a PostgreSQL dialect, an expression like the following:

    select([sometable.c.text.match("search string")])

will emit to the database:

    SELECT text @@ to_tsquery('search string') FROM table

The PostgreSQL text search functions such as to_tsquery() and to_tsvector() are available explicitly using the standard func construct. For example:

    select([
        func.to_tsvector('fat cats ate rats').match('cat & rat')
    ])

Emits the equivalent of:

    SELECT to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')

The postgresql.TSVECTOR type can provide for explicit CAST:

    from sqlalchemy.dialects.postgresql import TSVECTOR
    from sqlalchemy import select, cast
    select([cast("some text", TSVECTOR)])

produces a statement equivalent to:

    SELECT CAST('some text' AS TSVECTOR) AS anon_1

Full Text Searches in PostgreSQL are influenced by a combination of: the PostgreSQL setting of default_text_search_config, the regconfig used to build the GIN/GiST indexes, and the regconfig optionally passed in during a query.

When performing a Full Text Search against a column that has a GIN or GiST index that is already pre-computed (which is common on full text searches) one may need to explicitly pass in a particular PostgreSQL regconfig value to ensure the query-planner utilizes the index and does not re-compute the column on demand.

In order to provide for this explicit query planning, or to use different search strategies, the match method accepts a postgresql_regconfig keyword argument:

    select([mytable.c.id]).where(
        mytable.c.title.match('somestring', postgresql_regconfig='english')
    )

Emits the equivalent of:

    SELECT mytable.id FROM mytable
    WHERE mytable.title @@ to_tsquery('english', 'somestring')

One can also specifically pass in a ‘regconfig’ value to the to_tsvector() command as the initial argument:

    select([mytable.c.id]).where(
        func.to_tsvector('english', mytable.c.title)\
            .match('somestring', postgresql_regconfig='english')
    )

produces a statement equivalent to:

    SELECT mytable.id FROM mytable
    WHERE to_tsvector('english', mytable.title) @@
        to_tsquery('english', 'somestring')

It is recommended that you use the EXPLAIN ANALYZE… tool from PostgreSQL to ensure that you are generating queries with SQLAlchemy that take full advantage of any indexes you may have created for full text search.
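One way to do this from SQLAlchemy, sketched here with illustrative table and column names, is to execute EXPLAIN ANALYZE as a textual statement wrapping the query of interest:

    from sqlalchemy import text

    with engine.connect() as conn:
        plan = conn.execute(text(
            "EXPLAIN ANALYZE SELECT mytable.id FROM mytable "
            "WHERE mytable.title @@ to_tsquery('english', 'somestring')"
        ))
        for row in plan:
            print(row[0])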

FROM ONLY …

The dialect supports PostgreSQL’s ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT … FROM ONLY, UPDATE ONLY …, and DELETE FROM ONLY … syntaxes. It uses SQLAlchemy’s hints mechanism:

    # SELECT ... FROM ONLY ...
    result = table.select().with_hint(table, 'ONLY', 'postgresql')
    print(result.fetchall())

    # UPDATE ONLY ...
    table.update(values=dict(foo='bar')).with_hint('ONLY',
                                                   dialect_name='postgresql')

    # DELETE FROM ONLY ...
    table.delete().with_hint('ONLY', dialect_name='postgresql')

PostgreSQL-Specific Index Options

Several extensions to the Index construct are available, specific to the PostgreSQL dialect.

Partial Indexes

Partial indexes add criterion to the index definition so that the index is applied to a subset of rows. These can be specified on Index using the postgresql_where keyword argument:

    Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)

Operator Classes

PostgreSQL allows the specification of an operator class for each column of an index (see http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:

    Index(
        'my_index', my_table.c.id, my_table.c.data,
        postgresql_ops={
            'data': 'text_pattern_ops',
            'id': 'int4_ops'
        })

Note that the keys in the postgresql_ops dictionary are the “key” name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.
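For example, a sketch of a Column whose key differs from its database name (names here are illustrative); the postgresql_ops dictionary refers to the key, not the database name:

    my_table = Table(
        'my_table', metadata,
        Column('DataCol', String, key='data')  # accessed as my_table.c.data
    )

    Index(
        'my_index', my_table.c.data,
        postgresql_ops={'data': 'text_pattern_ops'}  # keyed by .key, not "DataCol"
    )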

If postgresql_ops is to be used against a complex SQL expression such as a function call, then to apply to the column it must be given a label that is identified in the dictionary by name, e.g.:

    Index(
        'my_index', my_table.c.id,
        func.lower(my_table.c.data).label('data_lower'),
        postgresql_ops={
            'data_lower': 'text_pattern_ops',
            'id': 'int4_ops'
        })

Index Types

PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see http://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be specified on Index using the postgresql_using keyword argument:

    Index('my_index', my_table.c.data, postgresql_using='gin')

The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.

Index Storage Parameters

PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument:

    Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})

New in version 1.0.6.

PostgreSQL allows the tablespace in which to create the index to be defined. The tablespace can be specified on Index using the postgresql_tablespace keyword argument:

    Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')

New in version 1.1.

Note that the same option is available on Table as well.

Indexes with CONCURRENTLY

The PostgreSQL index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:

    tbl = Table('testtbl', m, Column('data', Integer))

    idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

The above index construct will render DDL for CREATE INDEX, assuming PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:

    CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)

For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a connection-less dialect, it will emit:

    DROP INDEX CONCURRENTLY test_idx1

New in version 1.1: support for CONCURRENTLY on DROP INDEX. The CONCURRENTLY keyword is now only emitted if a high enough version of PostgreSQL is detected on the connection (or for a connection-less dialect).

When using CONCURRENTLY, the PostgreSQL database requires that the statement be invoked outside of a transaction block. The Python DBAPI enforces that even for a single statement, a transaction is present, so to use this construct, the DBAPI’s “autocommit” mode must be used:

    metadata = MetaData()
    table = Table(
        "foo", metadata,
        Column("id", String))
    index = Index(
        "foo_idx", table.c.id, postgresql_concurrently=True)

    with engine.connect() as conn:
        with conn.execution_options(isolation_level='AUTOCOMMIT'):
            table.create(conn)

See also

Transaction Isolation Level

PostgreSQL Index Reflection

The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint. When performing reflection using Table(…, autoload=True), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.

Changed in version 1.0.0: Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the PostgreSQL backend will no longer include a “mirrored” Index construct in Table.indexes if it is detected as corresponding to a unique constraint.
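As an illustration, the duplicates_constraint key can be observed through the Inspector interface; a minimal sketch, assuming a table named mytable that contains a UNIQUE CONSTRAINT:

    from sqlalchemy import inspect

    insp = inspect(engine)
    for index in insp.get_indexes('mytable'):
        # the key is only present for indexes mirroring a constraint
        if 'duplicates_constraint' in index:
            print(index['name'], 'mirrors', index['duplicates_constraint'])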

Special Reflection Options

The Inspector used for the PostgreSQL backend is an instance of PGInspector, which offers additional methods:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://localhost/test")
    insp = inspect(engine)  # will be a PGInspector

    print(insp.get_enums())

  • get_enums(schema=None)

    Return a list of ENUM objects.

    Each member is a dictionary containing these fields:

    • name - name of the enum

    • schema - the schema name for the enum.

    • visible - boolean, whether or not this enum is visible in the default search path.

    • labels - a list of string labels that apply to the enum.

    Parameters:

    • schema – schema name. If None, the default schema (typically ‘public’) is used. May also be set to ‘*’ to indicate load enums for all schemas.

    New in version 1.0.0.

  • get_foreign_table_names(schema=None)

    Return a list of FOREIGN TABLE names.

    Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.

New in version 1.0.0.

  • get_table_oid(table_name, schema=None)

    Return the OID for the given table name.

  • get_view_names(schema=None, include=('plain', 'materialized'))

    Return all view names in schema.

    Parameters:

    • schema – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.

    • include – specify which types of views to return. Passed as a string value (for a single type) or a tuple (for any number of types). Defaults to ('plain', 'materialized').

New in version 1.1.
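For example, to retrieve only the materialized views in the default schema:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://localhost/test")
    insp = inspect(engine)  # a PGInspector
    print(insp.get_view_names(include='materialized'))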

PostgreSQL Table Options

Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct:

  • TABLESPACE:

        Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')

    The above option is also available on the Index construct.

  • ON COMMIT:

        Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')

  • WITH OIDS:

        Table("some_table", metadata, ..., postgresql_with_oids=True)

  • WITHOUT OIDS:

        Table("some_table", metadata, ..., postgresql_with_oids=False)

  • INHERITS:

        Table("some_table", metadata, ..., postgresql_inherits="some_supertable")

        Table("some_table", metadata, ..., postgresql_inherits=("t1", "t2", ...))

    New in version 1.0.0.

  • PARTITION BY:

        Table("some_table", metadata, ...,
              postgresql_partition_by='LIST (part_column)')

    New in version 1.2.6.

See also

PostgreSQL CREATE TABLE options

ARRAY Types

The PostgreSQL dialect supports arrays, both as multidimensional column types as well as array literals:

JSON Types

The PostgreSQL dialect supports both JSON and JSONB datatypes, including psycopg2’s native support and support for all of PostgreSQL’s special operators:

HSTORE Type

The PostgreSQL HSTORE type as well as hstore literals are supported:

ENUM Types

PostgreSQL has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATED and DROPPED. The type object is also an independently reflectable entity. The following sections should be consulted:

Using ENUM with ARRAY

The combination of ENUM and ARRAY is not directly supported by backend DBAPIs at this time. In order to send and receive an ARRAY of ENUM, use the following workaround type, which decorates the postgresql.ARRAY datatype.

    import re

    import sqlalchemy as sa
    from sqlalchemy import TypeDecorator
    from sqlalchemy.dialects.postgresql import ARRAY

    class ArrayOfEnum(TypeDecorator):
        impl = ARRAY

        def bind_expression(self, bindvalue):
            return sa.cast(bindvalue, self)

        def result_processor(self, dialect, coltype):
            super_rp = super(ArrayOfEnum, self).result_processor(
                dialect, coltype)

            def handle_raw_string(value):
                inner = re.match(r"^{(.*)}$", value).group(1)
                return inner.split(",") if inner else []

            def process(value):
                if value is None:
                    return None
                return super_rp(handle_raw_string(value))
            return process

E.g.:

    Table(
        'mydata', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
    )

This type is not included as a built-in type as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.

Using JSON/JSONB with ARRAY

Similar to using ENUM, for an ARRAY of JSON/JSONB we need to render the appropriate CAST, however current psycopg2 drivers seem to handle the result for ARRAY of JSON automatically, so the type is simpler:

    import sqlalchemy as sa
    from sqlalchemy.dialects.postgresql import ARRAY

    class CastingArray(ARRAY):
        def bind_expression(self, bindvalue):
            return sa.cast(bindvalue, self)

E.g.:

    Table(
        'mydata', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', CastingArray(JSONB))
    )

PostgreSQL Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

    from sqlalchemy.dialects.postgresql import \
        ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE, \
        DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER, \
        INTERVAL, JSON, JSONB, MACADDR, MONEY, NUMERIC, OID, REAL, SMALLINT, TEXT, \
        TIME, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \
        DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR

Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:

  • class sqlalchemy.dialects.postgresql.aggregate_order_by(target, *order_by)

    Represent a PostgreSQL aggregate order by expression.

E.g.:

    from sqlalchemy.dialects.postgresql import aggregate_order_by
    expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
    stmt = select([expr])

would represent the expression:

    SELECT array_agg(a ORDER BY b DESC) FROM table;

Similarly:

    expr = func.string_agg(
        table.c.a,
        aggregate_order_by(literal_column("','"), table.c.a)
    )
    stmt = select([expr])

Would represent:

    SELECT string_agg(a, ',' ORDER BY a) FROM table;

New in version 1.1.

Changed in version 1.2.13: - the ORDER BY argument may be multiple terms

See also

array_agg

  • class sqlalchemy.dialects.postgresql.array(clauses, **kw)

    A PostgreSQL ARRAY literal.

This is used to produce ARRAY literals in SQL expressions, e.g.:

    from sqlalchemy.dialects.postgresql import array
    from sqlalchemy.dialects import postgresql
    from sqlalchemy import select, func

    stmt = select([
        array([1, 2]) + array([3, 4, 5])
    ])

    print(stmt.compile(dialect=postgresql.dialect()))

Produces the SQL:

    SELECT ARRAY[%(param_1)s, %(param_2)s] ||
        ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

An instance of array will always have the datatype ARRAY. The “inner” type of the array is inferred from the values present, unless the type_ keyword argument is passed:

    array(['foo', 'bar'], type_=CHAR)

Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner ARRAY type:

    stmt = select([
        array([
            array([1, 2]), array([3, 4]), array([column('q'), column('x')])
        ])
    ])
    print(stmt.compile(dialect=postgresql.dialect()))

Produces:

    SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
        ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1

New in version 1.3.6: added support for multidimensional array literals

See also

postgresql.ARRAY

  • class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)
  • Bases: sqlalchemy.types.ARRAY

PostgreSQL ARRAY type.

Changed in version 1.1: The postgresql.ARRAY type is now a subclass of the core types.ARRAY type.

The postgresql.ARRAY type is constructed in the same way as the core types.ARRAY type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:

    from sqlalchemy.dialects import postgresql

    mytable = Table("mytable", metadata,
        Column("data", postgresql.ARRAY(Integer, dimensions=2))
    )

The postgresql.ARRAY type provides all operations defined on the core types.ARRAY type, including support for “dimensions”, indexed access, and simple matching such as types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all(). The postgresql.ARRAY class also provides PostgreSQL-specific methods for containment operations, including postgresql.ARRAY.Comparator.contains(), postgresql.ARRAY.Comparator.contained_by(), and postgresql.ARRAY.Comparator.overlap(), e.g.:

    mytable.c.data.contains([1, 2])

The postgresql.ARRAY type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only.

Additionally, the postgresql.ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

See also

types.ARRAY - base array type

postgresql.array - produces a literal array value.

  • class Comparator(expr)
  • Bases: sqlalchemy.types.Comparator

Define comparison operations for ARRAY.

Note that these operations are in addition to those provided by the base types.ARRAY.Comparator class, including types.ARRAY.Comparator.any() and types.ARRAY.Comparator.all().

  • contained_by(other)

    Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

  • contains(other, **kwargs)

    Boolean expression. Test if elements are a superset of the elements of the argument array expression.

  • overlap(other)

    Boolean expression. Test if array has elements in common with an argument array expression.

  • __init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

    Construct an ARRAY.

E.g.:

    Column('myarray', ARRAY(Integer))

Arguments are:

Parameters:

  • item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][], are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.

  • as_tuple=False – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.

  • dimensions – if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly “non-dimensioned”, meaning they can store any number of dimensions no matter how they were declared.

  • zero_indexes=False – when True, index values will be converted between Python zero-based and PostgreSQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.

New in version 0.9.5.

  • sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)

    PostgreSQL-specific form of array_agg, ensures return type is postgresql.ARRAY and not the plain types.ARRAY, unless an explicit type_ is passed.

New in version 1.1.
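As a brief sketch of the difference (my_table is assumed), the PostgreSQL-specific return type makes operators such as ARRAY.Comparator.contains() available on the aggregated expression:

    from sqlalchemy import select
    from sqlalchemy.dialects.postgresql import array_agg

    # renders array_agg(my_table.value) @> ARRAY[%(param_1)s]
    stmt = select([array_agg(my_table.c.value).contains([1])])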

  • sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=<built-in function eq>)

    A synonym for the ARRAY.Comparator.any() method.

This method is legacy and is here for backwards-compatibility.

See also

expression.any_()

  • sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=<built-in function eq>)

    A synonym for the ARRAY.Comparator.all() method.

This method is legacy and is here for backwards-compatibility.

See also

expression.all_()

  • class sqlalchemy.dialects.postgresql.BYTEA(length=None)

    Bases: sqlalchemy.types.LargeBinary

    • __init__(length=None)

      inherited from the __init__() method of LargeBinary

      Construct a LargeBinary type.

      Parameters:

      • length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

  • class sqlalchemy.dialects.postgresql.CIDR
  • Bases: sqlalchemy.types.TypeEngine

  • class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)

    Bases: sqlalchemy.types.Float

    • __init__(precision=None, asdecimal=False, decimal_return_scale=None)

      inherited from the __init__() method of Float

Construct a Float.

Parameters:

  • precision – the numeric precision for use in DDL CREATE TABLE.

  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

  • decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

New in version 0.9.0.

  • class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)
  • Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum

PostgreSQL ENUM type.

This is a subclass of types.Enum which includes support for PG’s CREATE TYPE and DROP TYPE.

When the builtin type types.Enum is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use a postgresql.ENUM type as the implementation, so the special create/drop rules will be used.

The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has in relationship to the parent table, in that it may be “owned” by just a single table, or may be shared among many tables.

When using types.Enum or postgresql.ENUM in an “inline” fashion, the CREATE TYPE and DROP TYPE is emitted corresponding to when the Table.create() and Table.drop() methods are called:

    table = Table('sometable', metadata,
        Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
    )

    table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
    table.drop(engine)  # will emit DROP TABLE and DROP ENUM

To use a common enumerated type between multiple tables, the best practice is to declare the types.Enum or postgresql.ENUM independently, and associate it with the MetaData object itself:

    my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

    t1 = Table('sometable_one', metadata,
        Column('some_enum', my_enum)
    )

    t2 = Table('sometable_two', metadata,
        Column('some_enum', my_enum)
    )

When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

    t1.create(engine)  # will fail: no such type 'myenum'

If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create if not exists:

    # will check if enum exists, and emit CREATE TYPE if not
    t1.create(engine, checkfirst=True)

When using a metadata-level ENUM type, the type will always be created and dropped if either the metadata-wide create/drop is called:

    metadata.create_all(engine)  # will emit CREATE TYPE
    metadata.drop_all(engine)  # will emit DROP TYPE

The type can also be created and dropped directly:

    my_enum.create(engine)
    my_enum.drop(engine)

Changed in version 1.0.0: The PostgreSQL postgresql.ENUM type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

  • __init__(*enums, **kw)

    Construct an ENUM.

    Arguments are the same as that of types.Enum, but also including the following parameters.

    Parameters:

    • create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the create() and drop() methods can be used to emit SQL to a target bind.

  • create(bind=None, checkfirst=True)

    Emit CREATE TYPE for this ENUM.

    If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.

    Parameters:

    • bind – a connectable Engine, Connection, or similar object to emit SQL.

    • checkfirst – if True, a query against the PG catalog will be first performed to see if the type does not exist already before creating.

  • drop(bind=None, checkfirst=True)

    Emit DROP TYPE for this ENUM.

    If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.

    Parameters:

    • bind – a connectable Engine, Connection, or similar object to emit SQL.

    • checkfirst – if True, a query against the PG catalog will be first performed to see if the type actually exists before dropping.

  • class sqlalchemy.dialects.postgresql.HSTORE(text_type=None)

    Represent the PostgreSQL HSTORE type.

The HSTORE type stores dictionaries containing strings, e.g.:

    data_table = Table('data_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', HSTORE)
    )

    with engine.connect() as conn:
        conn.execute(
            data_table.insert(),
            data={"key1": "value1", "key2": "value2"}
        )

HSTORE provides for a wide range of operations, including:

  • Index operations:

        data_table.c.data['some key'] == 'some value'

  • Containment operations:

        data_table.c.data.has_key('some key')

        data_table.c.data.has_all(['one', 'two', 'three'])

  • Concatenation:

        data_table.c.data + {"k1": "v1"}

For a full list of special methods see HSTORE.comparator_factory.

For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with MutableDict dictionary now part of the sqlalchemy.ext.mutable extension. This extension will allow “in-place” changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:

    from sqlalchemy.ext.mutable import MutableDict

    class MyClass(Base):
        __tablename__ = 'data_table'

        id = Column(Integer, primary_key=True)
        data = Column(MutableDict.as_mutable(HSTORE))

    my_object = session.query(MyClass).one()

    # in-place mutation, requires Mutable extension
    # in order for the ORM to detect
    my_object.data['some_key'] = 'some value'

    session.commit()

When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.

See also

hstore - render the PostgreSQL hstore() function.

  • class Comparator(expr)
  • Bases: sqlalchemy.types.Indexable.Comparator, sqlalchemy.types.Concatenable.Comparator

Define comparison operations for HSTORE.

  • array()

    Text array expression. Returns array of alternating keys and values.

  • contained_by(other)

    Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.

  • contains(other, **kwargs)

    Boolean expression. Test if keys (or array) are a superset of/contained the keys of the argument hstore expression.

  • defined(key)

    Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.

  • delete(key)

    HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.

  • has_all(other)

    Boolean expression. Test for presence of all keys in hstore

  • has_any(other)

    Boolean expression. Test for presence of any key in hstore

  • has_key(other)

    Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

  • keys()

    Text array expression. Returns array of keys.

  • matrix()

    Text array expression. Returns array of [key, value] pairs.

  • slice(array)

    HStore expression. Returns a subset of an hstore defined by array of keys.

  • vals()

    Text array expression. Returns array of values.

  • __init__(text_type=None)

    Construct a new HSTORE.

    Parameters:

    • text_type – the type that should be used for indexed values. Defaults to types.Text.

New in version 1.1.0.

  • bind_processor(dialect)

    Return a conversion function for processing bind values.

    Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.

    If processing is not necessary, the method should return None.

    Parameters:

    • dialect – Dialect instance in use.

  • comparator_factory
  • alias of HSTORE.Comparator

  • result_processor(dialect, coltype)

    Return a conversion function for processing result row values.

    Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.

    If processing is not necessary, the method should return None.

    Parameters:

    • dialect – Dialect instance in use.

    • coltype – DBAPI coltype argument received in cursor.description.

  • class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)

    Construct an hstore value within a SQL expression using the PostgreSQL hstore() function.

    The hstore function accepts one or two arguments as described in the PostgreSQL documentation.

E.g.:

    from sqlalchemy.dialects.postgresql import array, hstore

    select([hstore('key1', 'value1')])

    select([
        hstore(
            array(['key1', 'key2', 'key3']),
            array(['value1', 'value2', 'value3'])
        )
    ])

See also

HSTORE - the PostgreSQL HSTORE datatype.

  • class sqlalchemy.dialects.postgresql.INET
  • Bases: sqlalchemy.types.TypeEngine

  • class sqlalchemy.dialects.postgresql.INTERVAL(precision=None, fields=None)

  • Bases: sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval

PostgreSQL INTERVAL type.

The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.

  • __init__(precision=None, fields=None)

    Construct an INTERVAL.

    Parameters:

    • precision – optional integer precision value

    • fields – string fields specifier. allows storage of fields to be limited, such as "YEAR", "MONTH", "DAY TO HOUR", etc.

New in version 1.2.

  • class sqlalchemy.dialects.postgresql.JSON(none_as_null=False, astext_type=None)
  • Bases: sqlalchemy.types.JSON

Represent the PostgreSQL JSON type.

This type is a specialization of the Core-level types.JSON type. Be sure to read the documentation for types.JSON for important tips regarding treatment of NULL values and ORM use.

Changed in version 1.1: postgresql.JSON is now a PostgreSQL-specific specialization of the new types.JSON type.

The operators provided by the PostgreSQL version of JSON include:

  • Index operations (the -> operator):

        data_table.c.data['some key']

        data_table.c.data[5]

  • Index operations returning text (the ->> operator):

        data_table.c.data['some key'].astext == 'some value'

  • Index operations with CAST (equivalent to CAST(col ->> ['some key'] AS <type>)):

        data_table.c.data['some key'].astext.cast(Integer) == 5

  • Path index operations (the #> operator):

        data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]

  • Path index operations returning text (the #>> operator):

        data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'

Changed in version 1.1: The ColumnElement.cast() operator on JSON objects now requires that the JSON.Comparator.astext modifier be called explicitly, if the cast works only from a textual string.

Index operations return an expression object whose type defaults to JSON by default, so that further JSON-oriented instructions may be called upon the result type.
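For example, index operations may be chained before the final conversion is applied; a short sketch using the data_table from the examples above:

    from sqlalchemy import Integer

    # roughly equivalent to CAST(data -> 'key_1' ->> 'key_2' AS INTEGER) = 5
    expr = data_table.c.data['key_1']['key_2'].astext.cast(Integer) == 5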

Custom serializers and deserializers are specified at the dialect level, that is using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:

    engine = create_engine("postgresql://scott:tiger@localhost/test",
        json_serializer=my_serialize_fn,
        json_deserializer=my_deserialize_fn
    )

When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.

See also

types.JSON - Core level JSON type

JSONB

  • class Comparator(expr)
  • Bases: sqlalchemy.types.Comparator

Define comparison operations for JSON.

  • property astext

    On an indexed expression, use the “astext” (e.g. “->>”) conversion when rendered in SQL.

    E.g.:

        select([data_table.c.data['some key'].astext])

See also

ColumnElement.cast()

  • __init__(none_as_null=False, astext_type=None)

    Construct a JSON type.

    Parameters:

    • none_as_null – if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

          from sqlalchemy import null
          conn.execute(table.insert(), data=null())

Changed in version 0.9.8: Added none_as_null, and null() is now supported in order to persist a NULL value.

See also

JSON.NULL

    • astext_type – the type to use for the JSON.Comparator.astext accessor on indexed attributes. Defaults to types.Text.

New in version 1.1.

  • class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False, astext_type=None)
  • Bases: sqlalchemy.dialects.postgresql.json.JSON

Represent the PostgreSQL JSONB type.

The JSONB type stores arbitrary JSONB format data, e.g.:

    data_table = Table('data_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('data', JSONB)
    )

    with engine.connect() as conn:
        conn.execute(
            data_table.insert(),
            data={"key1": "value1", "key2": "value2"}
        )

The JSONB type includes all operations provided by JSON, including the same behaviors for indexing operations. It also adds additional operators specific to JSONB, including JSONB.Comparator.has_key(), JSONB.Comparator.has_all(), JSONB.Comparator.has_any(), JSONB.Comparator.contains(), and JSONB.Comparator.contained_by().
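A short sketch of these operators, reusing the data_table from the example above:

    # renders data ? %(data_1)s
    expr = data_table.c.data.has_key('key1')

    # renders data @> %(data_1)s
    expr = data_table.c.data.contains({"key1": "value1"})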

Like the JSON type, the JSONB type does not detect in-place changes when used with the ORM, unless the sqlalchemy.ext.mutable extension is used.

Custom serializers and deserializers are shared with the JSON class, using the json_serializer and json_deserializer keyword arguments. These must be specified at the dialect level using create_engine(). When using psycopg2, the serializers are associated with the jsonb type using psycopg2.extras.register_default_jsonb on a per-connection basis, in the same way that psycopg2.extras.register_default_json is used to register these handlers with the json type.

New in version 0.9.7.

See also

JSON

  • class Comparator(expr)
  • Bases: sqlalchemy.dialects.postgresql.json.Comparator

Define comparison operations for JSONB.

  • contained_by(other)

    Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

  • contains(other, **kwargs)

    Boolean expression. Test if keys (or array) are a superset of/contained the keys of the argument jsonb expression.

  • has_all(other)

    Boolean expression. Test for presence of all keys in jsonb

  • has_any(other)

    Boolean expression. Test for presence of any key in jsonb

  • has_key(other)

    Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

  • class sqlalchemy.dialects.postgresql.MONEY

    Provide the PostgreSQL MONEY type.

New in version 1.2.

  • class sqlalchemy.dialects.postgresql.OID

    Provide the PostgreSQL OID type.

New in version 0.9.5.

  • class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None)
  • Bases: sqlalchemy.types.Float

The SQL REAL type.

  • __init__(precision=None, asdecimal=False, decimal_return_scale=None)

    inherited from the __init__() method of Float

    Construct a Float.

Parameters:

  • precision – the numeric precision for use in DDL CREATE TABLE.

  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

  • decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

New in version 0.9.0.

  • class sqlalchemy.dialects.postgresql.REGCLASS

    Provide the PostgreSQL REGCLASS type.

New in version 1.2.7.

  • class sqlalchemy.dialects.postgresql.TSVECTOR

    The postgresql.TSVECTOR type implements the PostgreSQL text search type TSVECTOR.

    It can be used to do full text queries on natural language documents.

New in version 0.9.0.

See also

Full Text Search

  • class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)

    PostgreSQL UUID type.

    Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects.

    The UUID type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

    • __init__(as_uuid=False)

      Construct a UUID type.

      Parameters:

      • as_uuid=False – if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
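A minimal usage sketch, generating values with the standard library uuid module (table and column names are illustrative):

    import uuid

    from sqlalchemy import Column, MetaData, Table
    from sqlalchemy.dialects.postgresql import UUID

    metadata = MetaData()
    t = Table('t', metadata,
        Column('id', UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    )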

Range Types

The new range column types found in PostgreSQL 9.2 onwards are catered for by the following types:

  • class sqlalchemy.dialects.postgresql.INT4RANGE

    Represent the PostgreSQL INT4RANGE type.

  • class sqlalchemy.dialects.postgresql.INT8RANGE

    Represent the PostgreSQL INT8RANGE type.

  • class sqlalchemy.dialects.postgresql.NUMRANGE

    Represent the PostgreSQL NUMRANGE type.

  • class sqlalchemy.dialects.postgresql.DATERANGE

    Represent the PostgreSQL DATERANGE type.

  • class sqlalchemy.dialects.postgresql.TSRANGE

    Represent the PostgreSQL TSRANGE type.

  • class sqlalchemy.dialects.postgresql.TSTZRANGE

    Represent the PostgreSQL TSTZRANGE type.

The types above get most of their functionality from the followingmixin:

  • class sqlalchemy.dialects.postgresql.ranges.RangeOperators
  • This mixin provides functionality for the Range Operators listed in Table 9-44 of the postgres documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself.

No extra support is provided for the Range Functions listed in Table 9-45 of the postgres documentation. For these, the normal func() object should be used.

  • class comparator_factory(expr)
  • Bases: sqlalchemy.types.Comparator

Define comparison operations for range types.

  • __ne__(other)

Boolean expression. Returns true if two ranges are not equal.

  • adjacent_to(other)

Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.

  • contained_by(other)

Boolean expression. Returns true if the column is contained within the right hand operand.

  • contains(other, **kw)

Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.

  • not_extend_left_of(other)

Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.

  • not_extend_right_of(other)

Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.

  • overlaps(other)

Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.

  • strictly_left_of(other)

Boolean expression. Returns true if the column is strictly left of the right hand operand.

  • strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.
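A short sketch of these operators in a filter, assuming psycopg2 as the driver and a hypothetical booking table:

from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import TSRANGE

metadata = MetaData()
booking = Table(
    'booking', metadata,
    Column('id', Integer, primary_key=True),
    Column('during', TSRANGE)
)

window = DateTimeRange(datetime(2013, 3, 23), datetime(2013, 3, 25))

# renders "booking.during && %(during_1)s" -- ranges with points in common
stmt = select([booking.c.id]).where(booking.c.during.overlaps(window))

# renders "booking.during @> %(during_1)s" -- a single point within the range
stmt = select([booking.c.id]).where(
    booking.c.during.contains(datetime(2013, 3, 24)))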

Warning

The range type DDL support should work with any PostgreSQL DBAPI driver, however the data types returned may vary. If you are using psycopg2, it’s recommended to upgrade to version 2.5 or later before using these column types.

When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you’re using for the column type. For psycopg2 these are psycopg2.extras.NumericRange, psycopg2.extras.DateRange, psycopg2.extras.DateTimeRange and psycopg2.extras.DateTimeTZRange, or the class you’ve registered with psycopg2.extras.register_range.

For example:

from datetime import datetime

from psycopg2.extras import DateTimeRange
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)

PostgreSQL Constraint Types

SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:

  • class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

A table-level EXCLUDE constraint.

Defines an EXCLUDE constraint as described in the postgres documentation.

E.g.:

const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some group')
)

The constraint is normally embedded into the Table construct directly, or added later using append_constraint():

some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const'
    )
)
  • Parameters

  • *elements – A sequence of two tuples of the form (column, operator) where “column” is a SQL expression element or a raw SQL string, most typically a Column object, and “operator” is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or sql.expression.column() object should be used.

  • name – Optional, the in-database name of this constraint.

  • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

  • initially – Optional string. If set, emit INITIALLY when issuing DDL for this constraint.

  • using – Optional string. If set, emit USING when issuing DDL for this constraint. Defaults to ‘gist’.

  • where – Optional SQL expression construct or literal SQL string. If set, emit WHERE when issuing DDL for this constraint.

Warning

The ExcludeConstraint.where argument to ExcludeConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.

For example:

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )

PostgreSQL DML Constructs

  • sqlalchemy.dialects.postgresql.dml.insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)
  • Construct a new Insert object.

This constructor is mirrored as a public API function; see insert() for a full usage and argument description.

  • class sqlalchemy.dialects.postgresql.dml.Insert(table, values=None, inline=False, bind=None, prefixes=None, returning=None, return_defaults=False, **dialect_kw)
  • Bases: sqlalchemy.sql.expression.Insert

PostgreSQL-specific implementation of INSERT.

Adds methods for PG-specific syntaxes such as ON CONFLICT.

New in version 1.1.

  • excluded
  • Provide the excluded namespace for an ON CONFLICT statement

PG’s ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute provides all columns in this row to be referenceable.

See also

INSERT…ON CONFLICT (Upsert) - example of how to use Insert.excluded
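A minimal upsert sketch using the excluded namespace; user_table and conn are assumed to be an existing Table and Connection:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(user_table).values(id=1, name='spongebob')
stmt = stmt.on_conflict_do_update(
    index_elements=[user_table.c.id],
    # refer to the row that would have been inserted
    set_=dict(name=stmt.excluded.name)
)
conn.execute(stmt)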

  • on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None)
  • Specifies a DO NOTHING action for ON CONFLICT clause.

The constraint and index_elements arguments are optional, but only one of these can be specified.

  • Parameters

  • constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

  • index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

  • index_where – Additional WHERE criterion that can be used to infer a conditional target index.

New in version 1.1.

See also

INSERT…ON CONFLICT (Upsert)
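For instance, a sketch of silently skipping conflicting rows, assuming a hypothetical user_table with a unique id column:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(user_table).values(id=1, name='sandy')
# emits "ON CONFLICT (id) DO NOTHING"
stmt = stmt.on_conflict_do_nothing(index_elements=['id'])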

  • on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None)
  • Specifies a DO UPDATE SET action for ON CONFLICT clause.

Either the constraint or index_elements argument is required, but only one of these can be specified.

  • Parameters

  • constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

  • index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

  • index_where – Additional WHERE criterion that can be used to infer a conditional target index.

  • set_ – Required argument. A dictionary or other mapping object with column names as keys and expressions or literals as values, specifying the SET actions to take. If the target Column specifies a “.key” attribute distinct from the column name, that key should be used.

Warning

This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

  • where – Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).

New in version 1.1.

See also

INSERT…ON CONFLICT (Upsert)
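A sketch combining set_ and where, again against a hypothetical user_table with visits and active columns:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(user_table).values(id=1, name='patrick', visits=1)
stmt = stmt.on_conflict_do_update(
    index_elements=['id'],
    set_=dict(visits=user_table.c.visits + 1),
    # rows failing this predicate are left alone, as with DO NOTHING
    where=(user_table.c.active == True)
)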

psycopg2

Support for the PostgreSQL database via the psycopg2 driver.

DBAPI

Documentation and download information (if applicable) for psycopg2 is available at: http://pypi.python.org/pypi/psycopg2/

Connecting

Connect String:

postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]

psycopg2 Connect Arguments

psycopg2-specific keyword arguments which are accepted by create_engine() are:

  • server_side_cursors: Enable the usage of “server side cursors” for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy’s ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the Connection.execution_options.stream_results execution option is a more targeted way of enabling this mode on a per-execution basis. (A combined sketch of several of these keyword arguments appears after this list.)

  • use_native_unicode: Enable the usage of Psycopg2 “native unicode” mode per connection. True by default.

See also

Disabling Native Unicode

  • isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect.

See also

Psycopg2 Transaction Isolation Level

  • client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2’s set_client_encoding() method.

See also

Unicode with Psycopg2

  • executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allows use of psycopg2 extensions for optimizing “executemany”-style queries. See the referenced section below for details.

See also

Psycopg2 Fast Execution Helpers

  • use_batch_mode: this is the previous setting used to affect “executemany” mode and is now deprecated.
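A combined sketch of several of the above flags; the URL and the specific values are illustrative:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    server_side_cursors=True,   # named cursors for unbuffered results
    client_encoding='utf8',     # applied via set_client_encoding()
    executemany_mode='batch'    # use psycopg2.extras.execute_batch
)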

Unix Domain Connections

psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:

create_engine("postgresql+psycopg2://user:password@/dbname")

By default, psycopg2 connects to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:

create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")

See also

PQconnectdbParams

Empty DSN Connections / Environment Variable Connections

The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for “trust” connections. This behavior can be further tailored using a particular set of environment variables which are prefixed with PG (e.g. PGHOST, PGUSER), which are consumed by libpq to take the place of any or all elements of the connection string.

For this form, the URL can be passed without any elements other than the initial scheme:

engine = create_engine('postgresql+psycopg2://')

In the above form, a blank “dsn” string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.

New in version 1.3.2: support for parameter-less connections with psycopg2.

See also

Environment Variables - PostgreSQL documentation on how to use PG… environment variables for connections.

Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), and Query.execution_options(), in addition to those not specific to DBAPIs:

  • isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

  • stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of “named” cursors in combination with special result handling methods so that result rows are not fully buffered. If None or not set, the server_side_cursors option of the Engine is used.

  • max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the BufferedRowResultProxy, and if omitted the buffer will grow to ultimately store 1000 rows at a time. (A short sketch of these options follows this list.)

New in version 1.0.6.
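A short sketch of these options applied per execution; the query text and the process() handler are hypothetical:

with engine.connect() as conn:
    result = conn.execution_options(
        stream_results=True,  # use a named server side cursor
        max_row_buffer=500    # cap the client side row buffer
    ).execute("SELECT * FROM big_table")
    for row in result:
        process(row)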

Psycopg2 Fast Execution Helpers

Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2’s executemany() performance, primarily with INSERT statements, by multiple orders of magnitude. SQLAlchemy allows this extension to be used for all executemany() style calls invoked by an Engine when used with multiple parameter sets, which includes the use of this feature both by the Core as well as by the ORM for inserts of objects with non-autogenerated primary key values, by adding the executemany_mode flag to create_engine():

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='batch')

Changed in version 1.3.7: - the use_batch_mode flag has been superseded by a new parameter executemany_mode which provides support both for psycopg2’s execute_batch helper as well as the execute_values helper.

Possible options for executemany_mode include:

  • None - By default, psycopg2’s extensions are not used, and the usual cursor.executemany() method is used when invoking batches of statements.

  • 'batch' - Uses psycopg2.extras.execute_batch so that multiple copies of a SQL query, each one corresponding to a parameter set passed to executemany(), are joined into a single SQL string separated by a semicolon. This is the same behavior as was provided by the use_batch_mode=True flag.

  • 'values' - For Core insert() constructs only (including those emitted by the ORM automatically), the psycopg2.extras.execute_values extension is used so that multiple parameter sets are grouped into a single INSERT statement and joined together with multiple VALUES expressions. This method requires that the string text of the VALUES clause inside the INSERT statement is manipulated, so is only supported with a compiled insert() construct where the format is predictable. For all other constructs, including plain textual INSERT statements not rendered by the SQLAlchemy expression language compiler, the psycopg2.extras.execute_batch method is used. It is therefore important to note that “values” mode implies that “batch” mode is also used for all statements for which “values” mode does not apply.

For both strategies, the executemany_batch_page_size and executemany_values_page_size arguments control how many parameter sets should be represented in each execution. Because “values” mode implies a fallback down to “batch” mode for non-INSERT statements, there are two independent page size arguments. For each, the default value of None means to use psycopg2’s defaults, which at the time of this writing are quite low at 100. For the execute_values method, a number as high as 10000 may prove to be performant, whereas for execute_batch, as the number represents full statements repeated, a number closer to the default of 100 is likely more appropriate:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values',
    executemany_values_page_size=10000, executemany_batch_page_size=500)

See also

Executing Multiple Statements - General information on using the Connection object to execute statements in such a way as to make use of the DBAPI .executemany() method.

Changed in version 1.3.7: - Added support for psycopg2.extras.execute_values. The use_batch_mode flag is superseded by the executemany_mode flag.

Unicode with Psycopg2

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through without change. Psycopg2 here will encode/decode string values based on the current “client encoding” setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf8, as a more useful default:

# postgresql.conf file

#client_encoding = sql_ascii  # actually, defaults to database encoding
client_encoding = utf8

A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2’s connection.set_client_encoding() method on all new connections based on the value passed to create_engine() using the client_encoding parameter:

# set_client_encoding() setting;
# works for *all* PostgreSQL versions
engine = create_engine("postgresql://user:pass@host/dbname",
                       client_encoding='utf8')

This overrides the encoding specified in the PostgreSQL client configuration. When using the parameter in this way, the psycopg2 driver emits SET client_encoding TO 'utf8' on the connection explicitly, and works in all PostgreSQL versions.

Note that the client_encoding setting as passed to create_engine() is not the same as the more recently added client_encoding parameter now supported by libpq directly. This is enabled when client_encoding is passed directly to psycopg2.connect(), and from SQLAlchemy is passed using the create_engine.connect_args parameter:

engine = create_engine(
    "postgresql://user:pass@host/dbname",
    connect_args={'client_encoding': 'utf8'})

# using the query string is equivalent
engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")

The above parameter was only added to libpq as of version 9.1 of PostgreSQL, so using the previous method is better for cross-version support.

Disabling Native Unicode

SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don’t fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8. SQLAlchemy’s own unicode encode/decode functionality is steadily becoming obsolete as most DBAPIs now support unicode fully.
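For instance, a sketch of turning the native unicode extension off; the URL is illustrative:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    use_native_unicode=False,  # skip psycopg2.extensions.UNICODE
    encoding='utf-8'           # codec SQLAlchemy then uses itself
)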

Bound Parameter Styles

The default parameter style for the psycopg2 dialect is “pyformat”, where SQL is rendered using %(paramname)s style. This format has the limitation that it does not accommodate the unusual case of parameter names that actually contain percent or parenthesis symbols; as SQLAlchemy in many cases generates bound parameter names based on the name of a column, the presence of these characters in a column name can lead to problems.

There are two solutions to the issue of a schema.Column that contains one of these characters in its name. One is to specify the schema.Column.key for columns that have such names:

measurement = Table('measurement', metadata,
    Column('Size (meters)', Integer, key='size_meters')
)

Above, an INSERT statement such as measurement.insert() will use size_meters as the parameter name, and a SQL expression such as measurement.c.size_meters > 10 will derive the bound parameter name from the size_meters key as well.

Changed in version 1.0.0: - SQL expressions will use Column.key as the source of naming when anonymous bound parameters are created in SQL expressions; previously, this behavior only applied to Table.insert() and Table.update() parameter names.

The other solution is to use a positional format; psycopg2 allows use of the “format” paramstyle, which can be passed to create_engine.paramstyle:

engine = create_engine(
    'postgresql://scott:tiger@localhost:5432/test', paramstyle='format')

With the above engine, instead of a statement like:

INSERT INTO measurement ("Size (meters)") VALUES (%(Size (meters))s)
{'Size (meters)': 1}

we instead see:

INSERT INTO measurement ("Size (meters)") VALUES (%s)
(1, )

Where above, the dictionary style is converted into a tuple with positional style.

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.
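A minimal SAVEPOINT sketch via Connection.begin_nested(); some_table is assumed to be an existing Table:

with engine.connect() as conn:
    trans = conn.begin()             # outer transaction
    savepoint = conn.begin_nested()  # emits SAVEPOINT
    try:
        conn.execute(some_table.insert(), {"id": 1})
        savepoint.commit()           # emits RELEASE SAVEPOINT
    except Exception:
        savepoint.rollback()         # emits ROLLBACK TO SAVEPOINT
    trans.commit()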

Psycopg2 Transaction Isolation Level

As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options(). When using the psycopg2 dialect, these options make use of psycopg2’s set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2’s API-level setting is always emitted at the start of each transaction in any case.

The psycopg2 dialect supports these constants for isolation level:

  • READ COMMITTED

  • READ UNCOMMITTED

  • REPEATABLE READ

  • SERIALIZABLE

  • AUTOCOMMIT
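For example, a sketch of selecting the AUTOCOMMIT level at the engine or connection scope; the URL is illustrative:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="AUTOCOMMIT")

# or per connection
with engine.connect() as conn:
    conn = conn.execution_options(isolation_level="AUTOCOMMIT")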

See also

Transaction Isolation Level

pg8000 Transaction Isolation Level

NOTICE logging

The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:

import logging

logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:

import logging

logging.basicConfig()  # log messages to stdout
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

See also

Logging HOWTO - on the python.org website

HSTORE type

The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type. The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

  • Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

  • If the use_native_hstore flag is at its default of True, and we’ve detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.

The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
                       use_native_hstore=False)

The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy’s own marshalling logic, and not that of psycopg2, which may be more performant.
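A brief sketch of an HSTORE column in use; the item table is hypothetical:

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import HSTORE

metadata = MetaData()
item = Table(
    'item', metadata,
    Column('id', Integer, primary_key=True),
    Column('attrs', HSTORE)
)

# plain Python dictionaries are marshalled to and from HSTORE
stmt = item.insert().values(attrs={"color": "red", "size": "L"})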

pg8000

Support for the PostgreSQL database via the pg8000 driver.

DBAPI

Documentation and download information (if applicable) for pg8000 is available at: https://pythonhosted.org/pg8000/

Connecting

Connect String:

postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

Note

The pg8000 dialect is not tested as part of SQLAlchemy’s continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

Unicode

pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

#client_encoding = sql_ascii  # actually, defaults to database encoding
client_encoding = utf8

The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

pg8000 Transaction Isolation Level

The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:

  • READ COMMITTED

  • READ UNCOMMITTED

  • REPEATABLE READ

  • SERIALIZABLE

  • AUTOCOMMIT

New in version 0.9.5: support for AUTOCOMMIT isolation level when using pg8000.

See also

Transaction Isolation Level

Psycopg2 Transaction Isolation Level

psycopg2cffi

Support for the PostgreSQL database via the psycopg2cffi driver.

DBAPI

Documentation and download information (if applicable) for psycopg2cffi is available at: http://pypi.python.org/pypi/psycopg2cffi/

Connecting

Connect String:

postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.

New in version 1.0.0.

See also

sqlalchemy.dialects.postgresql.psycopg2

py-postgresql

Support for the PostgreSQL database via the py-postgresql driver.

DBAPI

Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/

Connecting

Connect String:

postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]

Note

The pypostgresql dialect is not tested as part of SQLAlchemy’s continuous integration and may have unresolved issues. The recommended PostgreSQL driver is psycopg2.

pygresql

Support for the PostgreSQL database via the pygresql driver.

DBAPI

Documentation and download information (if applicable) for pygresql is available at: http://www.pygresql.org/

Connecting

Connect String:

postgresql+pygresql://user:password@host:port/dbname[?key=value&key=value...]

Note

The pygresql dialect is not tested as part of SQLAlchemy’s continuous integration and may have unresolved issues. The recommended PostgreSQL dialect is psycopg2.

zxjdbc

Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI

Drivers for this database are available at: http://jdbc.postgresql.org/

Connecting

Connect String:

postgresql+zxjdbc://scott:tiger@localhost/db