Describing Databases with MetaData

This section discusses the fundamental Table, Column and MetaData objects.

A collection of metadata entities is stored in an object aptly named MetaData:

    from sqlalchemy import *
    metadata = MetaData()

MetaData is a container object that keeps together many different features of a database (or multiple databases) being described.

To represent a table, use the Table class. Its two primary arguments are the table name, then the MetaData object which it will be associated with. The remaining positional arguments are mostly Column objects describing each column:

    user = Table('user', metadata,
        Column('user_id', Integer, primary_key=True),
        Column('user_name', String(16), nullable=False),
        Column('email_address', String(60)),
        Column('nickname', String(50), nullable=False)
    )

Above, a table called user is described, which contains four columns. The primary key of the table consists of the user_id column. Multiple columns may be assigned the primary_key=True flag which denotes a multi-column primary key, known as a composite primary key.
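As an illustrative sketch (the order_item table below is hypothetical and not used in later examples), a composite primary key is declared simply by flagging more than one column:

    order_item = Table('order_item', metadata,
        Column('order_id', Integer, primary_key=True),
        Column('item_id', Integer, primary_key=True),
        Column('quantity', Integer, nullable=False)
    )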

Note also that each column describes its datatype using objects corresponding to genericized types, such as Integer and String. SQLAlchemy features dozens of types of varying levels of specificity as well as the ability to create custom types. Documentation on the type system can be found at Column and Data Types.

Accessing Tables and Columns

The MetaData object contains all of the schema constructs we’ve associated with it. It supports a few methods of accessing these table objects, such as the sorted_tables accessor which returns a list of each Table object in order of foreign key dependency (that is, each table is preceded by all tables which it references):

    >>> for t in metadata.sorted_tables:
    ...     print(t.name)
    user
    user_preference
    invoice
    invoice_item

In most cases, individual Table objects have been explicitly declared, and these objects are typically accessed directly as module-level variables in an application. Once a Table has been defined, it has a full set of accessors which allow inspection of its properties. Given the following Table definition:

    employees = Table('employees', metadata,
        Column('employee_id', Integer, primary_key=True),
        Column('employee_name', String(60), nullable=False),
        Column('employee_dept', Integer, ForeignKey("departments.department_id"))
    )

Note the ForeignKey object used in this table - this construct defines a reference to a remote table, and is fully described in Defining Foreign Keys. Methods of accessing information about this table include:

    # access the column "employee_id":
    employees.columns.employee_id

    # or just
    employees.c.employee_id

    # via string
    employees.c['employee_id']

    # iterate through all columns
    for c in employees.c:
        print(c)

    # get the table's primary key columns
    for primary_key in employees.primary_key:
        print(primary_key)

    # get the table's foreign key objects:
    for fkey in employees.foreign_keys:
        print(fkey)

    # access the table's MetaData:
    employees.metadata

    # access the table's bound Engine or Connection, if its MetaData is bound:
    employees.bind

    # access a column's name, type, nullable, primary key, foreign key
    employees.c.employee_id.name
    employees.c.employee_id.type
    employees.c.employee_id.nullable
    employees.c.employee_id.primary_key
    employees.c.employee_dept.foreign_keys

    # get the "key" of a column, which defaults to its name, but can
    # be any user-defined string:
    employees.c.employee_name.key

    # access a column's table:
    employees.c.employee_id.table is employees

    # get the table related by a foreign key
    list(employees.c.employee_dept.foreign_keys)[0].column.table

Creating and Dropping Database Tables

Once you’ve defined some Table objects, assuming you’re working with a brand new database one thing you might want to do is issue CREATE statements for those tables and their related constructs (as an aside, it’s also quite possible that you don’t want to do this, if you already have some preferred methodology such as tools included with your database or an existing scripting system - if that’s the case, feel free to skip this section - SQLAlchemy has no requirement that it be used to create your tables).

The usual way to issue CREATE is to use create_all() on the MetaData object. This method will issue queries that first check for the existence of each individual table, and if not found will issue the CREATE statements:

    engine = create_engine('sqlite:///:memory:')

    metadata = MetaData()

    user = Table('user', metadata,
        Column('user_id', Integer, primary_key=True),
        Column('user_name', String(16), nullable=False),
        Column('email_address', String(60), key='email'),
        Column('nickname', String(50), nullable=False)
    )

    user_prefs = Table('user_prefs', metadata,
        Column('pref_id', Integer, primary_key=True),
        Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
        Column('pref_name', String(40), nullable=False),
        Column('pref_value', String(100))
    )

    metadata.create_all(engine)

    # SQL emitted by create_all():
    PRAGMA table_info(user)
    CREATE TABLE user(
        user_id INTEGER NOT NULL PRIMARY KEY,
        user_name VARCHAR(16) NOT NULL,
        email_address VARCHAR(60),
        nickname VARCHAR(50) NOT NULL
    )
    PRAGMA table_info(user_prefs)
    CREATE TABLE user_prefs(
        pref_id INTEGER NOT NULL PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES user(user_id),
        pref_name VARCHAR(40) NOT NULL,
        pref_value VARCHAR(100)
    )

create_all() creates foreign key constraints between tables usually inline with the table definition itself, and for this reason it also generates the tables in order of their dependency. There are options to change this behavior such that ALTER TABLE is used instead.
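As a hedged sketch of that option, the use_alter parameter of ForeignKeyConstraint (also accepted by ForeignKey) asks that the constraint be emitted via ALTER TABLE after both tables exist; the node and element tables below are purely illustrative:

    from sqlalchemy import (MetaData, Table, Column, Integer,
                            ForeignKeyConstraint)

    metadata = MetaData()

    node = Table('node', metadata,
        Column('node_id', Integer, primary_key=True),
        Column('primary_element', Integer)
    )

    element = Table('element', metadata,
        Column('element_id', Integer, primary_key=True),
        Column('parent_node_id', Integer),
        ForeignKeyConstraint(
            ['parent_node_id'], ['node.node_id'],
            use_alter=True, name='fk_element_parent_node'
        )
    )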

Dropping all tables is similarly achieved using the drop_all() method. This method does the exact opposite of create_all() - the presence of each table is checked first, and tables are dropped in reverse order of dependency.
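For example, assuming the engine and MetaData from the previous example:

    # drops user_prefs first, then user, checking for each table's presence
    metadata.drop_all(engine)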

Creating and dropping individual tables can be done via the create() and drop() methods of Table. These methods by default issue the CREATE or DROP regardless of the table being present:

    engine = create_engine('sqlite:///:memory:')

    meta = MetaData()

    employees = Table('employees', meta,
        Column('employee_id', Integer, primary_key=True),
        Column('employee_name', String(60), nullable=False, key='name'),
        Column('employee_dept', Integer, ForeignKey("departments.department_id"))
    )

    employees.create(engine)

    # SQL emitted by create():
    CREATE TABLE employees(
        employee_id INTEGER NOT NULL PRIMARY KEY,
        employee_name VARCHAR(60) NOT NULL,
        employee_dept INTEGER REFERENCES departments(department_id)
    )

drop() method:

    employees.drop(engine)

    DROP TABLE employees

To enable the “check first for the table existing” logic, add the checkfirst=True argument to create() or drop():

    employees.create(engine, checkfirst=True)
    employees.drop(engine, checkfirst=False)

Altering Database Objects through Migrations

While SQLAlchemy directly supports emitting CREATE and DROP statements for schema constructs, the ability to alter those constructs, usually via the ALTER statement as well as other database-specific constructs, is outside of the scope of SQLAlchemy itself. While it’s easy enough to emit ALTER statements and similar by hand, such as by passing a text() construct to Connection.execute() or by using the DDL construct, it’s a common practice to automate the maintenance of database schemas in relation to application code using schema migration tools.
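For instance, a hand-emitted ALTER might look like the following sketch (the table and column names are illustrative, and the statement itself is database-specific):

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///:memory:")

    with engine.begin() as conn:
        # assumes an "employees" table already exists in the target database
        conn.execute(text("ALTER TABLE employees ADD COLUMN manager_id INTEGER"))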

The SQLAlchemy project offers the Alembic migration tool for this purpose. Alembic features a highly customizable environment and a minimalistic usage pattern, supporting such features as transactional DDL, automatic generation of “candidate” migrations, an “offline” mode which generates SQL scripts, and support for branch resolution.

Alembic supersedes the SQLAlchemy-Migrate project, which is the original migration tool for SQLAlchemy and is now considered legacy.

Specifying the Schema Name

Most databases support the concept of multiple “schemas” - namespaces that refer to alternate sets of tables and other constructs. The server-side geometry of a “schema” takes many forms, including names of “schemas” under the scope of a particular database (e.g. PostgreSQL schemas), named sibling databases (e.g. MySQL / MariaDB access to other databases on the same server), as well as other concepts like tables owned by other usernames (Oracle, SQL Server) or even names that refer to alternate database files (SQLite ATTACH) or remote servers (Oracle DBLINK with synonyms).

What all of the above approaches have (mostly) in common is that there’s a way of referring to this alternate set of tables using a string name. SQLAlchemy refers to this name as the schema name. Within SQLAlchemy, this is nothing more than a string name which is associated with a Table object, and is then rendered into SQL statements in a manner appropriate to the target database such that the table is referred to in its remote “schema”, whatever mechanism that is on the target database.

The “schema” name may be associated directly with a Table using the Table.schema argument; when using the ORM with declarative table configuration, the parameter is passed using the __table_args__ parameter dictionary.
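For instance, a minimal sketch of the declarative form, assuming a hypothetical FinancialInfo mapped class:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class FinancialInfo(Base):
        __tablename__ = 'financial_info'
        __table_args__ = {"schema": "remote_banks"}

        id = Column(Integer, primary_key=True)
        value = Column(String(100), nullable=False)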

The “schema” name may also be associated with the MetaData object where it will take effect automatically for all Table objects associated with that MetaData that don’t otherwise specify their own name. Finally, SQLAlchemy also supports a “dynamic” schema name system that is often used for multi-tenant applications such that a single set of Table metadata may refer to a dynamically configured set of schema names on a per-connection or per-statement basis.

See also

Explicit Schema Name with Declarative Table - schema name specification when using the ORM declarative table configuration

The most basic example is that of the Table.schema argument using a Core Table object as follows:

    metadata = MetaData()

    financial_info = Table(
        'financial_info',
        metadata,
        Column('id', Integer, primary_key=True),
        Column('value', String(100), nullable=False),
        schema='remote_banks'
    )

SQL that is rendered using this Table, such as the SELECT statement below, will explicitly qualify the table name financial_info with the remote_banks schema name:

    >>> print(select(financial_info))
    SELECT remote_banks.financial_info.id, remote_banks.financial_info.value
    FROM remote_banks.financial_info

When a Table object is declared with an explicit schema name, it is stored in the internal MetaData namespace using the combination of the schema and table name. We can view this in the MetaData.tables collection by searching for the key 'remote_banks.financial_info':

    >>> metadata.tables['remote_banks.financial_info']
    Table('financial_info', MetaData(),
        Column('id', Integer(), table=<financial_info>, primary_key=True, nullable=False),
        Column('value', String(length=100), table=<financial_info>, nullable=False),
        schema='remote_banks')

This dotted name is also what must be used when referring to the table for use with the ForeignKey or ForeignKeyConstraint objects, even if the referring table is also in that same schema:

    customer = Table(
        "customer",
        metadata,
        Column('id', Integer, primary_key=True),
        Column('financial_info_id', ForeignKey("remote_banks.financial_info.id")),
        schema='remote_banks'
    )

The Table.schema argument may also be used with certain dialects to indicate a multiple-token (e.g. dotted) path to a particular table. This is particularly important on a database such as Microsoft SQL Server where there are often dotted “database/owner” tokens. The tokens may be placed directly in the name at once, such as:

    schema="dbo.scott"

See also

Multipart Schema Names - describes use of dotted schema names with the SQL Server dialect.

Specifying a Default Schema Name with MetaData

The MetaData object may also set up an explicit default option for all Table.schema parameters by passing the MetaData.schema argument to the top level MetaData construct:

    metadata = MetaData(schema="remote_banks")

    financial_info = Table(
        'financial_info',
        metadata,
        Column('id', Integer, primary_key=True),
        Column('value', String(100), nullable=False),
    )

Above, any Table object (as well as any Sequence object directly associated with the MetaData) that leaves the Table.schema parameter at its default of None will instead act as though the parameter were set to the value "remote_banks". This includes that the Table is cataloged in the MetaData using the schema-qualified name, that is:

    metadata.tables['remote_banks.financial_info']

When using the ForeignKey or ForeignKeyConstraint objects to refer to this table, either the schema-qualified name or the non-schema-qualified name may be used to refer to the remote_banks.financial_info table:

    # either will work:
    refers_to_financial_info = Table(
        'refers_to_financial_info',
        metadata,
        Column('id', Integer, primary_key=True),
        Column('fiid', ForeignKey('financial_info.id')),
    )

    # or
    refers_to_financial_info = Table(
        'refers_to_financial_info',
        metadata,
        Column('id', Integer, primary_key=True),
        Column('fiid', ForeignKey('remote_banks.financial_info.id')),
    )

When using a MetaData object that sets MetaData.schema, a Table that wishes to specify that it should not be schema qualified may use the special symbol BLANK_SCHEMA:

    from sqlalchemy import BLANK_SCHEMA

    metadata = MetaData(schema="remote_banks")

    financial_info = Table(
        'financial_info',
        metadata,
        Column('id', Integer, primary_key=True),
        Column('value', String(100), nullable=False),
        schema=BLANK_SCHEMA  # will not use "remote_banks"
    )

See also

MetaData.schema

Applying Dynamic Schema Naming Conventions

The names used by the Table.schema parameter may also be applied against a lookup that is dynamic on a per-connection or per-execution basis, so that for example in multi-tenant situations, each transaction or statement may be targeted at a specific set of schema names that change. The section Translation of Schema Names describes how this feature is used.

See also

Translation of Schema Names
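As a brief sketch of that feature, assuming the financial_info table and an Engine from the examples above, and a hypothetical per-tenant schema named tenant_schema_1:

    from sqlalchemy import select

    with engine.connect() as conn:
        # statements on this connection render "tenant_schema_1"
        # wherever the Table metadata says "remote_banks"
        conn = conn.execution_options(
            schema_translate_map={"remote_banks": "tenant_schema_1"}
        )
        result = conn.execute(select(financial_info))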

Setting a Default Schema for New Connections

The above approaches all refer to methods of including an explicit schema name within SQL statements. Database connections in fact feature the concept of a “default” schema, which is the name of the “schema” (or database, owner, etc.) that is used if a table name is not explicitly schema-qualified. These names are usually configured at the login level; for example, when connecting to a PostgreSQL database, the default “schema” is called “public”.

There are often cases where the default “schema” cannot be set via the login itself and instead would usefully be configured each time a connection is made, using a statement such as “SET SEARCH_PATH” on PostgreSQL or “ALTER SESSION” on Oracle. These approaches may be achieved by using the PoolEvents.connect() event, which allows access to the DBAPI connection when it is first created. For example, to set the Oracle CURRENT_SCHEMA variable to an alternate name:

    from sqlalchemy import event
    from sqlalchemy import create_engine

    engine = create_engine("oracle+cx_oracle://scott:tiger@tsn_name")

    # placeholder for the alternate schema name to apply to each new connection
    schema_name = "some_alternate_schema"

    @event.listens_for(engine, "connect", insert=True)
    def set_current_schema(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("ALTER SESSION SET CURRENT_SCHEMA=%s" % schema_name)
        cursor.close()

Above, the set_current_schema() event handler will take place immediately when the above Engine first connects; as the event is “inserted” into the beginning of the handler list, it will also take place before the dialect’s own event handlers are run, in particular including the one that will determine the “default schema” for the connection.

For other databases, consult the database and/or dialect documentation for specific information regarding how default schemas are configured.

Changed in version 1.4.0b2: The above recipe now works without the need to establish additional event handlers.

See also

Setting Alternate Search Paths on Connect - in the PostgreSQL dialect documentation.

Backend-Specific Options

Table supports database-specific options. For example, MySQL has different table backend types, including “MyISAM” and “InnoDB”. This can be expressed with Table using mysql_engine:

    addresses = Table('engine_email_addresses', meta,
        Column('address_id', Integer, primary_key=True),
        Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
        Column('email_address', String(20)),
        mysql_engine='InnoDB'
    )

Other backends may support table-level options as well - these would be described in the individual documentation sections for each dialect.
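For instance, the PostgreSQL dialect accepts a postgresql_tablespace option; a minimal sketch, with illustrative table and tablespace names:

    some_table = Table('some_table', metadata,
        Column('id', Integer, primary_key=True),
        postgresql_tablespace='some_tablespace'
    )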

Column, Table, MetaData API

Object Name / Description

Column

Represents a column in a database table.

MetaData

A collection of Table objects and their associated schema constructs.

SchemaItem

Base class for items that define a database schema.

BLANK_SCHEMA

Symbol indicating that a Table or Sequence should have ‘None’ for its schema, even if the parent MetaData has specified a schema.

Table

Represent a table in a database.

ThreadLocalMetaData

A MetaData variant that presents a different bind in every thread.

attribute sqlalchemy.schema.BLANK_SCHEMA

Symbol indicating that a Table or Sequence should have ‘None’ for its schema, even if the parent MetaData has specified a schema.

See also

MetaData.schema

Table.schema

Sequence.schema

New in version 1.0.14.

class sqlalchemy.schema.Column(*args, **kwargs)

Represents a column in a database table.

Class signature

class sqlalchemy.schema.Column (sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.ColumnClause)

  • method sqlalchemy.schema.Column.__eq__(other)

    inherited from the sqlalchemy.sql.expression.ColumnOperators.__eq__ method of ColumnOperators

    Implement the == operator.

    In a column context, produces the clause a = b. If the target is None, produces a IS NULL.

  • method sqlalchemy.schema.Column.__init__(*args, **kwargs)

    Construct a new Column object.

    • Parameters

      • name

        The name of this column as represented in the database. This argument may be the first positional argument, or specified via keyword.

        Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word. Names with any number of upper case characters will be quoted and sent exactly. Note that this behavior applies even for databases which standardize upper case names as case insensitive such as Oracle.

        The name field may be omitted at construction time and applied later, at any time before the Column is associated with a Table. This is to support convenient usage within the declarative extension.

      • type_

        The column’s type, indicated using an instance which subclasses TypeEngine. If no arguments are required for the type, the class of the type can be sent as well, e.g.:

            # use a type with arguments
            Column('data', String(50))

            # use no arguments
            Column('level', Integer)

        The type argument may be the second positional argument or specified by keyword.

        If the type is None or is omitted, it will first default to the special type NullType. If and when this Column is made to refer to another column using ForeignKey and/or ForeignKeyConstraint, the type of the remote-referenced column will be copied to this column as well, at the moment that the foreign key is resolved against that remote Column object.

        Changed in version 0.9.0: Support for propagation of type to a Column from its ForeignKey object has been improved and should be more reliable and timely.
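        As a small sketch of that behavior (the table and column names are illustrative), the datatype may be omitted on the referring column:

            # the type of "user_id" will be copied from "user.user_id"
            # when the ForeignKey is resolved
            Column('user_id', ForeignKey('user.user_id'))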

      • *args – Additional positional arguments include various SchemaItem derived constructs which will be applied as options to the column. These include instances of Constraint, ForeignKey, ColumnDefault, Sequence, Computed, and Identity. In some cases an equivalent keyword argument is available such as server_default, default and unique.

      • autoincrement

        Set up “auto increment” semantics for an integer primary key column. The default value is the string "auto" which indicates that a single-column primary key that is of an INTEGER type with no stated client-side or python-side defaults should receive auto increment semantics automatically; all other varieties of primary key columns will not. This includes that DDL such as PostgreSQL SERIAL or MySQL AUTO_INCREMENT will be emitted for this column during a table create, as well as that the column is assumed to generate new integer primary key values when an INSERT statement is invoked, which will be retrieved by the dialect. When used in conjunction with Identity on a dialect that supports it, this parameter has no effect.

        The flag may be set to True to indicate that a column which is part of a composite (e.g. multi-column) primary key should have autoincrement semantics, though note that only one column within a primary key may have this setting. It can also be set to True to indicate autoincrement semantics on a column that has a client-side or server-side default configured, however note that not all dialects can accommodate all styles of default as an “autoincrement”. It can also be set to False on a single-column primary key that has a datatype of INTEGER in order to disable auto increment semantics for that column.

        Changed in version 1.1: The autoincrement flag now defaults to "auto" which indicates autoincrement semantics by default for single-column integer primary keys only; for composite (multi-column) primary keys, autoincrement is never implicitly enabled; as always, autoincrement=True will allow for at most one of those columns to be an “autoincrement” column. autoincrement=True may also be set on a Column that has an explicit client-side or server-side default, subject to limitations of the backend database and dialect.

        The setting only has an effect for columns which are:

        • Integer derived (i.e. INT, SMALLINT, BIGINT).

        • Part of the primary key

        • Not referring to another column via ForeignKey, unless the value is specified as 'ignore_fk':

              # turn on autoincrement for this column despite
              # the ForeignKey()
              Column('id', ForeignKey('other.id'),
                     primary_key=True, autoincrement='ignore_fk')

          It is typically not desirable to have “autoincrement” enabled on a column that refers to another via foreign key, as such a column is required to refer to a value that originates from elsewhere.

        The setting has these two effects on columns that meet the above criteria:

        • DDL issued for the column will include database-specific keywords intended to signify this column as an “autoincrement” column, such as AUTO_INCREMENT on MySQL, SERIAL on PostgreSQL, and IDENTITY on MS-SQL. It does not issue AUTOINCREMENT for SQLite since this is a special SQLite flag that is not required for autoincrementing behavior.

          See also

          SQLite Auto Incrementing Behavior

        • The column will be considered to be available using an “autoincrement” method specific to the backend database, such as calling upon cursor.lastrowid, using RETURNING in an INSERT statement to get at a sequence-generated value, or using special functions such as “SELECT scope_identity()”. These methods are highly specific to the DBAPIs and databases in use and vary greatly, so care should be taken when associating autoincrement=True with a custom default generation function.

      • default

        A scalar, Python callable, or ColumnElement expression representing the default value for this column, which will be invoked upon insert if this column is otherwise not specified in the VALUES clause of the insert. This is a shortcut to using ColumnDefault as a positional argument; see that class for full detail on the structure of the argument.

        Contrast this argument to Column.server_default which creates a default generator on the database side.
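        For example, a Python-side default may be a scalar or a callable; a brief sketch with an illustrative column name, assuming DateTime is imported from sqlalchemy:

            from datetime import datetime

            Column('created_at', DateTime, default=datetime.utcnow)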

        See also

        Column INSERT/UPDATE Defaults

      • doc – optional String that can be used by the ORM or similar to document attributes on the Python side. This attribute does not render SQL comments; use the Column.comment parameter for this purpose.

      • key – An optional string identifier which will identify this Column object on the Table. When a key is provided, this is the only identifier referencing the Column within the application, including ORM attribute mapping; the name field is used only when rendering SQL.

      • index

        When True, indicates that a Index construct will be automatically generated for this Column, which will result in a “CREATE INDEX” statement being emitted for the Table when the DDL create operation is invoked.

        Using this flag is equivalent to making use of the Index construct explicitly at the level of the Table construct itself:

            Table(
                "some_table",
                metadata,
                Column("x", Integer),
                Index("ix_some_table_x", "x")
            )

        To add the Index.unique flag to the Index, set both the Column.unique and Column.index flags to True simultaneously, which will have the effect of rendering the “CREATE UNIQUE INDEX” DDL instruction instead of “CREATE INDEX”.

        The name of the index is generated using the default naming convention which for the Index construct is of the form ix_<tablename>_<columnname>.

        As this flag is intended only as a convenience for the common case of adding a single-column, default configured index to a table definition, explicit use of the Index construct should be preferred for most use cases, including composite indexes that encompass more than one column, indexes with SQL expressions or ordering, backend-specific index configuration options, and indexes that use a specific name.

        Note

        the Column.index attribute on Column does not indicate if this column is indexed or not, only if this flag was explicitly set here. To view indexes on a column, view the Table.indexes collection or use Inspector.get_indexes().

        See also

        Indexes

        Configuring Constraint Naming Conventions

        Column.unique

      • info – Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

      • nullable

        When set to False, will cause the “NOT NULL” phrase to be added when generating DDL for the column. When True, will normally generate nothing (in SQL this defaults to “NULL”), except in some very specific backend-specific edge cases where “NULL” may render explicitly. Defaults to True unless Column.primary_key is also True or the column specifies a Identity, in which case it defaults to False. This parameter is only used when issuing CREATE TABLE statements.

        Note

        When the column specifies a Identity this parameter is in general ignored by the DDL compiler. The PostgreSQL database allows nullable identity column by setting this parameter to True explicitly.

      • onupdate

        A scalar, Python callable, or ClauseElement representing a default value to be applied to the column within UPDATE statements, which will be invoked upon update if this column is not present in the SET clause of the update. This is a shortcut to using ColumnDefault as a positional argument with for_update=True.

        See also

        Column INSERT/UPDATE Defaults - complete discussion of onupdate

      • primary_key – If True, marks this column as a primary key column. Multiple columns can have this flag set to specify composite primary keys. As an alternative, the primary key of a Table can be specified via an explicit PrimaryKeyConstraint object.

      • server_default

        A FetchedValue instance, str, Unicode or text() construct representing the DDL DEFAULT value for the column.

        String types will be emitted as-is, surrounded by single quotes:

            Column('x', Text, server_default="val")

            x TEXT DEFAULT 'val'

        A text() expression will be rendered as-is, without quotes:

            Column('y', DateTime, server_default=text('NOW()'))

            y DATETIME DEFAULT NOW()

        Strings and text() will be converted into a DefaultClause object upon initialization.

        Use FetchedValue to indicate that an already-existing column will generate a default value on the database side which will be available to SQLAlchemy for post-fetch after inserts. This construct does not specify any DDL and the implementation is left to the database, such as via a trigger.

        See also

        Server-invoked DDL-Explicit Default Expressions - complete discussion of server side defaults

      • server_onupdate

        A FetchedValue instance representing a database-side default generation function, such as a trigger. This indicates to SQLAlchemy that a newly generated value will be available after updates. This construct does not actually implement any kind of generation function within the database, which instead must be specified separately.

        Warning

        This directive does not currently produce MySQL’s “ON UPDATE CURRENT_TIMESTAMP()” clause. See Rendering ON UPDATE CURRENT TIMESTAMP for MySQL / MariaDB’s explicit_defaults_for_timestamp for background on how to produce this clause.

        See also

        Marking Implicitly Generated Values, timestamps, and Triggered Columns

      • quote – Force quoting of this column’s name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.

      • unique

        When True, and the Column.index parameter is left at its default value of False, indicates that a UniqueConstraint construct will be automatically generated for this Column, which will result in a “UNIQUE CONSTRAINT” clause referring to this column being included in the CREATE TABLE statement emitted, when the DDL create operation for the Table object is invoked.

        When this flag is True while the Column.index parameter is simultaneously set to True, the effect instead is that a Index construct which includes the Index.unique parameter set to True is generated. See the documentation for Column.index for additional detail.

        Using this flag is equivalent to making use of the UniqueConstraint construct explicitly at the level of the Table construct itself:

            Table(
                "some_table",
                metadata,
                Column("x", Integer),
                UniqueConstraint("x")
            )

        The UniqueConstraint.name parameter of the unique constraint object is left at its default value of None; in the absence of a naming convention for the enclosing MetaData, the UNIQUE CONSTRAINT construct will be emitted as unnamed, which typically invokes a database-specific naming convention to take place.

        As this flag is intended only as a convenience for the common case of adding a single-column, default configured unique constraint to a table definition, explicit use of the UniqueConstraint construct should be preferred for most use cases, including composite constraints that encompass more than one column, backend-specific index configuration options, and constraints that use a specific name.

        Note

        the Column.unique attribute on Column does not indicate if this column has a unique constraint or not, only if this flag was explicitly set here. To view indexes and unique constraints that may involve this column, view the Table.indexes and/or Table.constraints collections or use Inspector.get_indexes() and/or Inspector.get_unique_constraints()

        See also

        UNIQUE Constraint

        Configuring Constraint Naming Conventions

        Column.index

      • system

        When True, indicates this is a “system” column, that is a column which is automatically made available by the database, and should not be included in the columns list for a CREATE TABLE statement.

        For more elaborate scenarios where columns should be conditionally rendered differently on different backends, consider custom compilation rules for CreateColumn.

      • comment

        Optional string that will render an SQL comment on table creation.

        New in version 1.2: Added the Column.comment parameter to Column.

  • method sqlalchemy.schema.Column.__le__(other)

    inherited from the sqlalchemy.sql.expression.ColumnOperators.__le__ method of ColumnOperators

    Implement the <= operator.

    In a column context, produces the clause a <= b.

  • method sqlalchemy.schema.Column.__lt__(other)

    inherited from the sqlalchemy.sql.expression.ColumnOperators.__lt__ method of ColumnOperators

    Implement the < operator.

    In a column context, produces the clause a < b.

  • method sqlalchemy.schema.Column.__ne__(other)

    inherited from the sqlalchemy.sql.expression.ColumnOperators.__ne__ method of ColumnOperators

    Implement the != operator.

    In a column context, produces the clause a != b. If the target is None, produces a IS NOT NULL.

  • method sqlalchemy.schema.Column.all_()

    inherited from the ColumnOperators.all_() method of ColumnOperators

    Produce a all_() clause against the parent object.

    This operator is only appropriate against a scalar subquery object, or for some backends a column expression that is against the ARRAY type, e.g.:

        # postgresql '5 = ALL (somearray)'
        expr = 5 == mytable.c.somearray.all_()

        # mysql '5 = ALL (SELECT value FROM table)'
        expr = 5 == select(table.c.value).scalar_subquery().all_()

    See also

    all_() - standalone version

    any_() - ANY operator

    New in version 1.1.

  • attribute sqlalchemy.schema.Column.anon_key_label

    inherited from the ColumnElement.anon_key_label attribute of ColumnElement

    Provides a constant ‘anonymous key label’ for this ColumnElement.

    Compare to anon_label, except that the “key” of the column, if available, is used to generate the label.

    This is used when a deduplicating key is placed into the columns collection of a selectable.

  • attribute sqlalchemy.schema.Column.anon_label

    inherited from the ColumnElement.anon_label attribute of ColumnElement

    Provides a constant ‘anonymous label’ for this ColumnElement.

    This is a label() expression which will be named at compile time. The same label() is returned each time anon_label is called so that expressions can reference anon_label multiple times, producing the same label name at compile time.

    The compiler uses this function automatically at compile time for expressions that are known to be ‘unnamed’ like binary expressions and function calls.

  • method sqlalchemy.schema.Column.any_()

    inherited from the ColumnOperators.any_() method of ColumnOperators

    Produce a any_() clause against the parent object.

    This operator is only appropriate against a scalar subquery object, or for some backends a column expression that is against the ARRAY type, e.g.:

        # postgresql '5 = ANY (somearray)'
        expr = 5 == mytable.c.somearray.any_()

        # mysql '5 = ANY (SELECT value FROM table)'
        expr = 5 == select(table.c.value).scalar_subquery().any_()

    See also

    any_() - standalone version

    all_() - ALL operator

    New in version 1.1.

  • classmethod sqlalchemy.schema.Column.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

        Index.argument_for("mydialect", "length", None)

        some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

    See also

    How do I render SQL expressions as strings, possibly with bound parameters inlined?
  • method sqlalchemy.schema.Column.concat(other)

    inherited from the ColumnOperators.concat() method of ColumnOperators

    Implement the ‘concat’ operator.

    In a column context, produces the clause a || b, or uses the concat() operator on MySQL.

  • method sqlalchemy.schema.Column.contains(other, **kwargs)

    inherited from the ColumnOperators.contains() method of ColumnOperators

    Implement the ‘contains’ operator.

    Produces a LIKE expression that tests against a match for the middle of a string value:

        column LIKE '%' || <other> || '%'

    E.g.:

        stmt = select(sometable).\
            where(sometable.c.column.contains("foobar"))

    Since the operator uses LIKE, wildcard characters "%" and "_" that are present inside the <other> expression will behave like wildcards as well. For literal string values, the ColumnOperators.contains.autoescape flag may be set to True to apply escaping to occurrences of these characters within the string value so that they match as themselves and not as wildcard characters. Alternatively, the ColumnOperators.contains.escape parameter will establish a given character as an escape character which can be of use when the target expression is not a literal string.

    • Parameters

      • other – expression to be compared. This is usually a plain string value, but can also be an arbitrary SQL expression. LIKE wildcard characters % and _ are not escaped by default unless the ColumnOperators.contains.autoescape flag is set to True.

      • autoescape

        boolean; when True, establishes an escape character within the LIKE expression, then applies it to all occurrences of "%", "_" and the escape character itself within the comparison value, which is assumed to be a literal string and not a SQL expression.

        An expression such as:

            somecolumn.contains("foo%bar", autoescape=True)

        Will render as:

            somecolumn LIKE '%' || :param || '%' ESCAPE '/'

        With the value of :param as "foo/%bar".

      • escape

        a character which when given will render with the ESCAPE keyword to establish that character as the escape character. This character can then be placed preceding occurrences of % and _ to allow them to act as themselves and not wildcard characters.

        An expression such as:

            somecolumn.contains("foo/%bar", escape="^")

        Will render as:

            somecolumn LIKE '%' || :param || '%' ESCAPE '^'

        The parameter may also be combined with ColumnOperators.contains.autoescape:

            somecolumn.contains("foo%bar^bat", escape="^", autoescape=True)

        Where above, the given literal parameter will be converted to "foo^%bar^^bat" before being passed to the database.

    See also

    ColumnOperators.startswith()

    ColumnOperators.endswith()

    ColumnOperators.like()
  • method sqlalchemy.schema.Column.copy(**kw)

    Deprecated since version 1.4: The Column.copy() method is deprecated and will be removed in a future release.

  • method sqlalchemy.schema.Column.desc()

    inherited from the ColumnOperators.desc() method of ColumnOperators

    Produce a desc() clause against the parent object.

  • attribute sqlalchemy.schema.Column.dialect_kwargs

    inherited from the DialectKWArgs.dialect_kwargs attribute of DialectKWArgs

    A collection of keyword arguments specified as dialect-specific options to this construct.

    The arguments are present here in their original <dialect>_<kwarg> format. Only arguments that were actually passed are included; unlike the DialectKWArgs.dialect_options collection, which contains all options known by this dialect including defaults.

    The collection is also writable; keys are accepted of the form <dialect>_<kwarg> where the value will be assembled into the list of options.

    New in version 0.9.2.

    Changed in version 0.9.4: The DialectKWArgs.dialect_kwargs collection is now writable.

    See also

    DialectKWArgs.dialect_options - nested dictionary form

  • attribute sqlalchemy.schema.Column.dialect_options

    inherited from the DialectKWArgs.dialect_options attribute of DialectKWArgs

    A collection of keyword arguments specified as dialect-specific options to this construct.

    This is a two-level nested registry, keyed to <dialect_name> and <argument_name>. For example, the postgresql_where argument would be locatable as:

        arg = my_object.dialect_options['postgresql']['where']

    New in version 0.9.2.

    See also

    DialectKWArgs.dialect_kwargs - flat dictionary form

  • method sqlalchemy.schema.Column.distinct()

    inherited from the ColumnOperators.distinct() method of ColumnOperators

    Produce a distinct() clause against the parent object.

  • method sqlalchemy.schema.Column.endswith(other, **kwargs)

    inherited from the ColumnOperators.endswith() method of ColumnOperators

    Implement the ‘endswith’ operator.

    Produces a LIKE expression that tests against a match for the end of a string value:

        column LIKE '%' || <other>

    E.g.:

        stmt = select(sometable).\
            where(sometable.c.column.endswith("foobar"))

    Since the operator uses LIKE, wildcard characters "%" and "_" that are present inside the <other> expression will behave like wildcards as well. For literal string values, the ColumnOperators.endswith.autoescape flag may be set to True to apply escaping to occurrences of these characters within the string value so that they match as themselves and not as wildcard characters. Alternatively, the ColumnOperators.endswith.escape parameter will establish a given character as an escape character which can be of use when the target expression is not a literal string.

    • Parameters

      • other – expression to be compared. This is usually a plain string value, but can also be an arbitrary SQL expression. LIKE wildcard characters % and _ are not escaped by default unless the ColumnOperators.endswith.autoescape flag is set to True.

      • autoescape

        boolean; when True, establishes an escape character within the LIKE expression, then applies it to all occurrences of "%", "_" and the escape character itself within the comparison value, which is assumed to be a literal string and not a SQL expression.

        An expression such as:

            somecolumn.endswith("foo%bar", autoescape=True)

        Will render as:

            somecolumn LIKE '%' || :param ESCAPE '/'

        With the value of :param as "foo/%bar".

      • escape

        a character which when given will render with the ESCAPE keyword to establish that character as the escape character. This character can then be placed preceding occurrences of % and _ to allow them to act as themselves and not wildcard characters.

        An expression such as:

            somecolumn.endswith("foo/%bar", escape="^")

        Will render as:

            somecolumn LIKE '%' || :param ESCAPE '^'

        The parameter may also be combined with ColumnOperators.endswith.autoescape:

            somecolumn.endswith("foo%bar^bat", escape="^", autoescape=True)

        Where above, the given literal parameter will be converted to "foo^%bar^^bat" before being passed to the database.

    See also

    ColumnOperators.startswith()

    ColumnOperators.contains()

    ColumnOperators.like()
  • attribute sqlalchemy.schema.Column.expression

    inherited from the ColumnElement.expression attribute of ColumnElement

    Return a column expression.

    Part of the inspection interface; returns self.

  • attribute sqlalchemy.schema.Column.foreign_keys = None

    A collection of all ForeignKey marker objects associated with this Column.

    Each object is a member of a Table-wide ForeignKeyConstraint.

    See also

    Table.foreign_keys

  • method sqlalchemy.schema.Column.get_children(column_tables=False, **kw)

    inherited from the ColumnClause.get_children() method of ColumnClause

    Return immediate child Traversible elements of this Traversible.

    This is used for visit traversal.

    **kw may contain flags that change the collection that is returned, for example to return a subset of items in order to cut down on larger traversals, or to return child items from a different context (such as schema-level collections instead of clause-level).

  • method sqlalchemy.schema.Column.ilike(other, escape=None)

    inherited from the ColumnOperators.ilike() method of ColumnOperators

    Implement the ilike operator, e.g. case insensitive LIKE.

    In a column context, produces an expression either of the form:

        lower(a) LIKE lower(other)

    Or on backends that support the ILIKE operator:

        a ILIKE other

    E.g.:

        stmt = select(sometable).\
            where(sometable.c.column.ilike("%foobar%"))
    • Parameters

      • other – expression to be compared

      • escape

        optional escape character, renders the ESCAPE keyword, e.g.:

            somecolumn.ilike("foo/%bar", escape="/")
    See also

    ColumnOperators.like()
  • method sqlalchemy.schema.Column.in_(other)

    inherited from the ColumnOperators.in_() method of ColumnOperators

    Implement the in operator.

    In a column context, produces the clause column IN <other>.

    The given parameter other may be:

    • A list of literal values, e.g.:

          stmt.where(column.in_([1, 2, 3]))

      In this calling form, the list of items is converted to a set of bound parameters the same length as the list given:

          WHERE COL IN (?, ?, ?)
    • A list of tuples may be provided if the comparison is against a tuple_() containing multiple expressions:

          from sqlalchemy import tuple_

          stmt.where(tuple_(col1, col2).in_([(1, 10), (2, 20), (3, 30)]))
    • An empty list, e.g.:

          stmt.where(column.in_([]))

      In this calling form, the expression renders an “empty set” expression. These expressions are tailored to individual backends and are generally trying to get an empty SELECT statement as a subquery. For example, on SQLite, the expression is:

          WHERE col IN (SELECT 1 FROM (SELECT 1) WHERE 1!=1)

      Changed in version 1.4: empty IN expressions now use an execution-time generated SELECT subquery in all cases.

    • A bound parameter, e.g. bindparam(), may be used if it includes the bindparam.expanding flag:

          stmt.where(column.in_(bindparam('value', expanding=True)))

      In this calling form, the expression renders a special non-SQL placeholder expression that looks like:

          WHERE COL IN ([EXPANDING_value])

      This placeholder expression is intercepted at statement execution time to be converted into the variable number of bound parameter form illustrated earlier. If the statement were executed as:

          connection.execute(stmt, {"value": [1, 2, 3]})

      The database would be passed a bound parameter for each value:

          WHERE COL IN (?, ?, ?)

      New in version 1.2: added “expanding” bound parameters

      If an empty list is passed, a special “empty list” expression, which is specific to the database in use, is rendered. On SQLite this would be:

          WHERE COL IN (SELECT 1 FROM (SELECT 1) WHERE 1!=1)

      New in version 1.3: “expanding” bound parameters now support empty lists

    • a select() construct, which is usually a correlated scalar select:

          stmt.where(
              column.in_(
                  select(othertable.c.y).
                  where(table.c.x == othertable.c.x)
              )
          )

      In this calling form, ColumnOperators.in_() renders as given:

          WHERE COL IN (SELECT othertable.y
          FROM othertable WHERE othertable.x = table.x)
    • Parameters

      other – a list of literals, a select() construct, or a bindparam() construct that includes the bindparam.expanding flag set to True.

  • attribute sqlalchemy.schema.Column.index = None

    The value of the Column.index parameter.

    Does not indicate if this Column is actually indexed or not; use Table.indexes.

    See also

    Table.indexes

  • attribute sqlalchemy.schema.Column.info

    inherited from the SchemaItem.info attribute of SchemaItem

    Info dictionary associated with the object, allowing user-defined data to be associated with this SchemaItem.

    The dictionary is automatically generated when first accessed. It can also be specified in the constructor of some objects, such as Table and Column.

  • method sqlalchemy.schema.Column.is_(other)

    inherited from the ColumnOperators.is_() method of ColumnOperators

    Implement the IS operator.

    Normally, IS is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS may be desirable if comparing to boolean values on certain platforms.
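    A brief sketch (the table and column names are illustrative):

        stmt = select(sometable).where(sometable.c.is_active.is_(True))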

    See also

    ColumnOperators.is_not()

  • method sqlalchemy.schema.Column.is_distinct_from(other)

    inherited from the ColumnOperators.is_distinct_from() method of ColumnOperators

    Implement the IS DISTINCT FROM operator.

    Renders “a IS DISTINCT FROM b” on most platforms; on some such as SQLite may render “a IS NOT b”.

    New in version 1.1.

  • method sqlalchemy.schema.Column.is_not(other)

    inherited from the ColumnOperators.is_not() method of ColumnOperators

    Implement the IS NOT operator.

    Normally, IS NOT is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS NOT may be desirable if comparing to boolean values on certain platforms.

    Changed in version 1.4: The is_not() operator is renamed from isnot() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.is_()

  • method sqlalchemy.schema.Column.is_not_distinct_from(other)

    inherited from the ColumnOperators.is_not_distinct_from() method of ColumnOperators

    Implement the IS NOT DISTINCT FROM operator.

    Renders “a IS NOT DISTINCT FROM b” on most platforms; on some such as SQLite may render “a IS b”.

    Changed in version 1.4: The is_not_distinct_from() operator is renamed from isnot_distinct_from() in previous releases. The previous name remains available for backwards compatibility.

    New in version 1.1.

  • method sqlalchemy.schema.Column.isnot(other)

    inherited from the ColumnOperators.isnot() method of ColumnOperators

    Implement the IS NOT operator.

    Normally, IS NOT is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS NOT may be desirable if comparing to boolean values on certain platforms.

    Changed in version 1.4: The is_not() operator is renamed from isnot() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.is_()

  • method sqlalchemy.schema.Column.isnot_distinct_from(other)

    inherited from the ColumnOperators.isnot_distinct_from() method of ColumnOperators

    Implement the IS NOT DISTINCT FROM operator.

    Renders “a IS NOT DISTINCT FROM b” on most platforms; on some such as SQLite may render “a IS b”.

    Changed in version 1.4: The is_not_distinct_from() operator is renamed from isnot_distinct_from() in previous releases. The previous name remains available for backwards compatibility.

    New in version 1.1.

  • attribute sqlalchemy.schema.Column.kwargs

    inherited from the DialectKWArgs.kwargs attribute of DialectKWArgs

    A synonym for DialectKWArgs.dialect_kwargs.

  • method sqlalchemy.schema.Column.label(name)

    inherited from the ColumnElement.label() method of ColumnElement

    Produce a column label, i.e. <columnname> AS <name>.

    This is a shortcut to the label() function.

    If ‘name’ is None, an anonymous label name will be generated.
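    For example, a brief sketch using the employees table from earlier in this document:

        stmt = select(employees.c.employee_name.label("name"))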

  • method sqlalchemy.schema.Column.like(other, escape=None)

    inherited from the ColumnOperators.like() method of ColumnOperators

    Implement the like operator.

    In a column context, produces the expression:

        a LIKE other

    E.g.:

        stmt = select(sometable).\
            where(sometable.c.column.like("%foobar%"))
    • Parameters

      • other – expression to be compared

      • escape

        optional escape character, renders the ESCAPE keyword, e.g.:

            somecolumn.like("foo/%bar", escape="/")
    See also

    ColumnOperators.ilike()
  • method sqlalchemy.schema.Column.match(other, **kwargs)

    inherited from the ColumnOperators.match() method of ColumnOperators

    Implements a database-specific ‘match’ operator.

    ColumnOperators.match() attempts to resolve to a MATCH-like function or operator provided by the backend. Examples include:

    • PostgreSQL - renders x @@ to_tsquery(y)

    • MySQL - renders MATCH (x) AGAINST (y IN BOOLEAN MODE)

    • Oracle - renders CONTAINS(x, y)

    • other backends may provide special implementations.

    • Backends without any special implementation will emit the operator as “MATCH”. This is compatible with SQLite, for example.
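    A brief sketch (the table and column names are illustrative):

        stmt = select(sometable).where(sometable.c.description.match("sqlalchemy"))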

  • class sqlalchemy.schema.Column.memoized_attribute(fget, doc=None)

    A read-only @property that is only evaluated once.

  • classmethod sqlalchemy.schema.Column.memoized_instancemethod(fn)

    inherited from the HasMemoized.memoized_instancemethod() method of HasMemoized

    Decorate a method to memoize its return value.

  • method sqlalchemy.schema.Column.not_ilike(other, escape=None)

    inherited from the ColumnOperators.not_ilike() method of ColumnOperators

    Implement the NOT ILIKE operator.

    This is equivalent to using negation with ColumnOperators.ilike(), i.e. ~x.ilike(y).

    Changed in version 1.4: The not_ilike() operator is renamed from notilike() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.ilike()

  • method sqlalchemy.schema.Column.not_in(other)

    inherited from the ColumnOperators.not_in() method of ColumnOperators

    Implement the NOT IN operator.

    This is equivalent to using negation with ColumnOperators.in_(), i.e. ~x.in_(y).

    In the case that other is an empty sequence, the compiler produces an “empty not in” expression. This defaults to the expression “1 = 1” to produce true in all cases. The create_engine.empty_in_strategy may be used to alter this behavior.

    Changed in version 1.4: The not_in() operator is renamed from notin_() in previous releases. The previous name remains available for backwards compatibility.

    Changed in version 1.2: The ColumnOperators.in_() and ColumnOperators.not_in() operators now produce a “static” expression for an empty IN sequence by default.

    See also

    ColumnOperators.in_()
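
    For illustration, a minimal sketch of ColumnOperators.not_in() (the account table here is hypothetical):

    1. from sqlalchemy import Column, Integer, MetaData, String, Table, select
    2. metadata = MetaData()
    3. account = Table("account", metadata, Column("id", Integer, primary_key=True), Column("status", String(20)))
    4. # renders a NOT IN expression with the two values bound as parameters
    5. stmt = select(account).where(account.c.status.not_in(["closed", "suspended"]))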

  • method sqlalchemy.schema.Column.not_like(other, escape=None)

    inherited from the ColumnOperators.not_like() method of ColumnOperators

    Implement the NOT LIKE operator.

    This is equivalent to using negation with ColumnOperators.like(), i.e. ~x.like(y).

    Changed in version 1.4: The not_like() operator is renamed from notlike() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.like()

  • method sqlalchemy.schema.Column.notilike(other, escape=None)

    inherited from the ColumnOperators.notilike() method of ColumnOperators

    Implement the NOT ILIKE operator.

    This is equivalent to using negation with ColumnOperators.ilike(), i.e. ~x.ilike(y).

    Changed in version 1.4: The not_ilike() operator is renamed from notilike() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.ilike()

  • method sqlalchemy.schema.Column.notin_(other)

    inherited from the ColumnOperators.notin_() method of ColumnOperators

    Implement the NOT IN operator.

    This is equivalent to using negation with ColumnOperators.in_(), i.e. ~x.in_(y).

    In the case that other is an empty sequence, the compiler produces an “empty not in” expression. This defaults to the expression “1 = 1” to produce true in all cases. The create_engine.empty_in_strategy may be used to alter this behavior.

    Changed in version 1.4: The not_in() operator is renamed from notin_() in previous releases. The previous name remains available for backwards compatibility.

    Changed in version 1.2: The ColumnOperators.in_() and ColumnOperators.not_in() operators now produce a “static” expression for an empty IN sequence by default.

    See also

    ColumnOperators.in_()

  • method sqlalchemy.schema.Column.notlike(other, escape=None)

    inherited from the ColumnOperators.notlike() method of ColumnOperators

    Implement the NOT LIKE operator.

    This is equivalent to using negation with ColumnOperators.like(), i.e. ~x.like(y).

    Changed in version 1.4: The not_like() operator is renamed from notlike() in previous releases. The previous name remains available for backwards compatibility.

    See also

    ColumnOperators.like()

  • method sqlalchemy.schema.Column.nulls_first()

    inherited from the ColumnOperators.nulls_first() method of ColumnOperators

    Produce a nulls_first() clause against the parent object.

    Changed in version 1.4: The nulls_first() operator is renamed from nullsfirst() in previous releases. The previous name remains available for backwards compatibility.

  • method sqlalchemy.schema.Column.nulls_last()

    inherited from the ColumnOperators.nulls_last() method of ColumnOperators

    Produce a nulls_last() clause against the parent object.

    Changed in version 1.4: The nulls_last() operator is renamed from nullslast() in previous releases. The previous name remains available for backwards compatibility.
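
    For illustration, a minimal sketch combining ColumnOperators.nulls_first() / ColumnOperators.nulls_last() with an ordering modifier (the account table here is hypothetical):

    1. from sqlalchemy import Column, Integer, MetaData, String, Table, select
    2. metadata = MetaData()
    3. account = Table("account", metadata, Column("id", Integer, primary_key=True), Column("name", String(50)))
    4. # renders: ORDER BY account.name DESC NULLS LAST
    5. stmt = select(account).order_by(account.c.name.desc().nulls_last())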

  • method sqlalchemy.schema.Column.nullsfirst()

    inherited from the ColumnOperators.nullsfirst() method of ColumnOperators

    Produce a nulls_first() clause against the parent object.

    Changed in version 1.4: The nulls_first() operator is renamed from nullsfirst() in previous releases. The previous name remains available for backwards compatibility.

  • method sqlalchemy.schema.Column.nullslast()

    inherited from the ColumnOperators.nullslast() method of ColumnOperators

    Produce a nulls_last() clause against the parent object.

    Changed in version 1.4: The nulls_last() operator is renamed from nullslast() in previous releases. The previous name remains available for backwards compatibility.

  • method sqlalchemy.schema.Column.op(opstring, precedence=0, is_comparison=False, return_type=None)

    inherited from the Operators.op() method of Operators

    Produce a generic operator function.

    e.g.:

    1. somecolumn.op("*")(5)

    produces:

    1. somecolumn * 5

    This function can also be used to make bitwise operators explicit. For example:

    1. somecolumn.op('&')(0xff)

    is a bitwise AND of the value in somecolumn.

    • Parameters

      • opstring – a string which will be output as the infix operator between this element and the expression passed to the generated function.

      • precedence – precedence to apply to the operator, when parenthesizing expressions. A lower number will cause the expression to be parenthesized when applied against another operator with higher precedence. The default value of 0 is lower than all operators except for the comma (,) and AS operators. A value of 100 will be higher or equal to all operators, and -100 will be lower than or equal to all operators.

      • is_comparison

        if True, the operator will be considered as a “comparison” operator, that is, one which evaluates to a boolean true/false value, like ==, >, etc. This flag should be set so that ORM relationships can establish that the operator is a comparison operator when used in a custom join condition.

        New in version 0.9.2: - added the Operators.op.is_comparison flag.

      • return_type – a TypeEngine class or object that will force the return type of an expression produced by this operator to be of that type. By default, operators that specify Operators.op.is_comparison will resolve to Boolean, and those that do not will be of the same type as the left-hand operand.

  1. See also
  2. [Redefining and Creating New Operators]($993b6f7a0d78cd7b.md#types-operators)
  3. [Using custom operators in join conditions]($e1f42b7742e49253.md#relationship-custom-operator)
  • method sqlalchemy.schema.Column.operate(op, *other, **kwargs)

    inherited from the ColumnElement.operate() method of ColumnElement

    Operate on an argument.

    This is the lowest level of operation, raises NotImplementedError by default.

    Overriding this on a subclass can allow common behavior to be applied to all operations. For example, overriding ColumnOperators to apply func.lower() to the left and right side:

    1. class MyComparator(ColumnOperators):
    2. def operate(self, op, other):
    3. return op(func.lower(self), func.lower(other))
    • Parameters

      • op – Operator callable.

      • *other – the ‘other’ side of the operation. Will be a single scalar for most operations.

      • **kwargs – modifiers. These may be passed by special operators such as ColumnOperators.contains().

  • method sqlalchemy.schema.Column.references(column)

    Return True if this Column references the given column via foreign key.
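
    For illustration, a minimal sketch of Column.references() (the dept and staff tables here are hypothetical):

    1. from sqlalchemy import Column, ForeignKey, Integer, MetaData, Table
    2. metadata = MetaData()
    3. dept = Table("dept", metadata, Column("id", Integer, primary_key=True))
    4. staff = Table("staff", metadata, Column("id", Integer, primary_key=True), Column("dept_id", Integer, ForeignKey("dept.id")))
    5. # True - staff.dept_id carries a ForeignKey that points at dept.id
    6. print(staff.c.dept_id.references(dept.c.id))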

  • method sqlalchemy.schema.Column.regexp_match(pattern, flags=None)

    inherited from the ColumnOperators.regexp_match() method of ColumnOperators

    Implements a database-specific ‘regexp match’ operator.

    E.g.:

    1. stmt = select(table.c.some_column).where(
    2. table.c.some_column.regexp_match('^(b|c)')
    3. )

    ColumnOperators.regexp_match() attempts to resolve to a REGEXP-like function or operator provided by the backend; however, the specific regular expression syntax and flags available are not backend agnostic.

    Examples include:

    • PostgreSQL - renders x ~ y or x !~ y when negated.

    • Oracle - renders REGEXP_LIKE(x, y)

    • SQLite - uses SQLite’s REGEXP placeholder operator and calls into the Python re.match() builtin.

    • other backends may provide special implementations.

    • Backends without any special implementation will emit the operator as “REGEXP” or “NOT REGEXP”. This is compatible with SQLite and MySQL, for example.

    Regular expression support is currently implemented for Oracle, PostgreSQL, MySQL and MariaDB. Partial support is available for SQLite. Support among third-party dialects may vary.

    • Parameters

      • pattern – The regular expression pattern string or column clause.

      • flags – Any regular expression string flags to apply. Flags tend to be backend specific. It can be a string or a column clause. Some backends, like PostgreSQL and MariaDB, may alternatively specify the flags as part of the pattern. When using the ignore case flag ‘i’ in PostgreSQL, the ignore case regexp match operator ~* or !~* will be used.

  1. New in version 1.4.
  2. See also
  3. [`ColumnOperators.regexp_replace()`]($f62ce11674ae62ed.md#sqlalchemy.sql.expression.ColumnOperators.regexp_replace "sqlalchemy.sql.expression.ColumnOperators.regexp_replace")
  • method sqlalchemy.schema.Column.regexp_replace(pattern, replacement, flags=None)

    inherited from the ColumnOperators.regexp_replace() method of ColumnOperators

    Implements a database-specific ‘regexp replace’ operator.

    E.g.:

    1. stmt = select(
    2. table.c.some_column.regexp_replace(
    3. 'b(..)',
    4. 'XY',
    5. flags='g'
    6. )
    7. )

    ColumnOperators.regexp_replace() attempts to resolve to a REGEXP_REPLACE-like function provided by the backend, which usually emits the REGEXP_REPLACE() function. However, the specific regular expression syntax and flags available are not backend agnostic.

    Regular expression replacement support is currently implemented for Oracle, PostgreSQL, MySQL 8 or greater and MariaDB. Support among third-party dialects may vary.

    • Parameters

      • pattern – The regular expression pattern string or column clause.

      • replacement – The replacement string or column clause.

      • flags – Any regular expression string flags to apply. Flags tend to be backend specific. It can be a string or a column clause. Some backends, like PostgreSQL and MariaDB, may alternatively specify the flags as part of the pattern.

  1. New in version 1.4.
  2. See also
  3. [`ColumnOperators.regexp_match()`]($f62ce11674ae62ed.md#sqlalchemy.sql.expression.ColumnOperators.regexp_match "sqlalchemy.sql.expression.ColumnOperators.regexp_match")
  • method sqlalchemy.schema.Column.reverse_operate(op, other, **kwargs)

    inherited from the ColumnElement.reverse_operate() method of ColumnElement

    Reverse operate on an argument.

    Usage is the same as operate().

  • method sqlalchemy.schema.Column.self_group(against=None)

    inherited from the ColumnElement.self_group() method of ColumnElement

    Apply a ‘grouping’ to this ClauseElement.

    This method is overridden by subclasses to return a “grouping” construct, i.e. parenthesis. In particular it’s used by “binary” expressions to provide a grouping around themselves when placed into a larger expression, as well as by select() constructs when placed into the FROM clause of another select(). (Note that subqueries should normally be created using the Select.alias() method, as many platforms require nested SELECT statements to be named).

    As expressions are composed together, the application of self_group() is automatic - end-user code should never need to use this method directly. Note that SQLAlchemy’s clause constructs take operator precedence into account - so parenthesis might not be needed, for example, in an expression like x OR (y AND z) - AND takes precedence over OR.

    The base self_group() method of ClauseElement just returns self.

  • method sqlalchemy.schema.Column.shares_lineage(othercolumn)

    inherited from the ColumnElement.shares_lineage() method of ColumnElement

    Return True if the given ColumnElement has a common ancestor to this ColumnElement.

  • method sqlalchemy.schema.Column.startswith(other, **kwargs)

    inherited from the ColumnOperators.startswith() method of ColumnOperators

    Implement the startswith operator.

    Produces a LIKE expression that tests against a match for the start of a string value:

    1. column LIKE <other> || '%'

    E.g.:

    1. stmt = select(sometable).\
    2. where(sometable.c.column.startswith("foobar"))

    Since the operator uses LIKE, wildcard characters "%" and "_" that are present inside the <other> expression will behave like wildcards as well. For literal string values, the ColumnOperators.startswith.autoescape flag may be set to True to apply escaping to occurrences of these characters within the string value so that they match as themselves and not as wildcard characters. Alternatively, the ColumnOperators.startswith.escape parameter will establish a given character as an escape character which can be of use when the target expression is not a literal string.

    • Parameters

      • other – expression to be compared. This is usually a plain string value, but can also be an arbitrary SQL expression. LIKE wildcard characters % and _ are not escaped by default unless the ColumnOperators.startswith.autoescape flag is set to True.

      • autoescape

        boolean; when True, establishes an escape character within the LIKE expression, then applies it to all occurrences of "%", "_" and the escape character itself within the comparison value, which is assumed to be a literal string and not a SQL expression.

        An expression such as:

        1. somecolumn.startswith("foo%bar", autoescape=True)

        Will render as:

        1. somecolumn LIKE :param || '%' ESCAPE '/'

        With the value of :param as "foo/%bar".

      • escape

        a character which when given will render with the ESCAPE keyword to establish that character as the escape character. This character can then be placed preceding occurrences of % and _ to allow them to act as themselves and not wildcard characters.

        An expression such as:

        1. somecolumn.startswith("foo/%bar", escape="^")

        Will render as:

        1. somecolumn LIKE :param || '%' ESCAPE '^'

        The parameter may also be combined with ColumnOperators.startswith.autoescape:

        1. somecolumn.startswith("foo%bar^bat", escape="^", autoescape=True)

        Where above, the given literal parameter will be converted to "foo^%bar^^bat" before being passed to the database.

  1. See also
  2. [`ColumnOperators.endswith()`]($f62ce11674ae62ed.md#sqlalchemy.sql.expression.ColumnOperators.endswith "sqlalchemy.sql.expression.ColumnOperators.endswith")
  3. [`ColumnOperators.contains()`]($f62ce11674ae62ed.md#sqlalchemy.sql.expression.ColumnOperators.contains "sqlalchemy.sql.expression.ColumnOperators.contains")
  4. [`ColumnOperators.like()`]($f62ce11674ae62ed.md#sqlalchemy.sql.expression.ColumnOperators.like "sqlalchemy.sql.expression.ColumnOperators.like")

class sqlalchemy.schema.MetaData(bind=None, schema=None, quote_schema=None, naming_convention=None, info=None)

A collection of Table objects and their associated schema constructs.

Holds a collection of Table objects as well as an optional binding to an Engine or Connection. If bound, the Table objects in the collection and their columns may participate in implicit SQL execution.

The Table objects themselves are stored in the MetaData.tables dictionary.

MetaData is a thread-safe object for read operations. Construction of new tables within a single MetaData object, either explicitly or via reflection, may not be completely thread-safe.

See also

Describing Databases with MetaData - Introduction to database metadata

Class signature

class sqlalchemy.schema.MetaData (sqlalchemy.schema.SchemaItem)

  • method sqlalchemy.schema.MetaData.__init__(bind=None, schema=None, quote_schema=None, naming_convention=None, info=None)

    Create a new MetaData object.

    • Parameters

      • bind

        An Engine or Connection to bind to. May also be a string or URL instance, these are passed to create_engine() and this MetaData will be bound to the resulting engine.

        Deprecated since version 1.4: The MetaData.bind argument is deprecated and will be removed in SQLAlchemy 2.0.

      • reflect – Optional, automatically load all tables from the bound database. Defaults to False. MetaData.bind is required when this option is set.

      • schema

        The default schema to use for the Table, Sequence, and potentially other objects associated with this MetaData. Defaults to None.

        See also

        Specifying a Default Schema Name with MetaData - details on how the MetaData.schema parameter is used.

        Table.schema

        Sequence.schema

      • quote_schema – Sets the quote_schema flag for those Table, Sequence, and other objects which make usage of the local schema name.

      • info

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

      • naming_convention

        a dictionary referring to values which will establish default naming conventions for Constraint and Index objects, for those objects which are not given a name explicitly.

        The keys of this dictionary may be:

        • a constraint or index class, e.g. the UniqueConstraint or ForeignKeyConstraint class, or the Index class

        • a string mnemonic for one of the known constraint classes; "fk", "pk", "ix", "ck", "uq" for foreign key, primary key, index, check, and unique constraint, respectively.

        • the string name of a user-defined “token” that can be used to define new naming tokens.

        The values associated with each “constraint class” or “constraint mnemonic” key are string naming templates, such as "uq_%(table_name)s_%(column_0_name)s", which describe how the name should be composed. The values associated with user-defined “token” keys should be callables of the form fn(constraint, table), which accepts the constraint/index object and Table as arguments, returning a string result.

        The built-in names are as follows, some of which may only be available for certain types of constraint:

        • %(table_name)s - the name of the Table object associated with the constraint.

        • %(referred_table_name)s - the name of the Table object associated with the referencing target of a ForeignKeyConstraint.

        • %(column_0_name)s - the name of the Column at index position “0” within the constraint.

        • %(column_0N_name)s - the name of all Column objects in order within the constraint, joined without a separator.

        • %(column_0_N_name)s - the name of all Column objects in order within the constraint, joined with an underscore as a separator.

        • %(column_0_label)s, %(column_0N_label)s, %(column_0_N_label)s - the label of either the zeroth Column or all Columns, separated with or without an underscore

        • %(column_0_key)s, %(column_0N_key)s, %(column_0_N_key)s - the key of either the zeroth Column or all Columns, separated with or without an underscore

        • %(referred_column_0_name)s, %(referred_column_0N_name)s, %(referred_column_0_N_name)s, %(referred_column_0_key)s, %(referred_column_0N_key)s, … column tokens which render the names/keys/labels of columns that are referenced by a ForeignKeyConstraint.

        • %(constraint_name)s - a special key that refers to the existing name given to the constraint. When this key is present, the Constraint object’s existing name will be replaced with one that is composed from the template string that uses this token. When this token is present, it is required that the Constraint is given an explicit name ahead of time.

        • user-defined: any additional token may be implemented by passing it along with a fn(constraint, table) callable to the naming_convention dictionary.

        New in version 1.3.0: - added new %(column_0N_name)s, %(column_0_N_name)s, and related tokens that produce concatenations of names, keys, or labels for all columns referred to by a given constraint.

        See also

        Configuring Constraint Naming Conventions - for detailed usage examples.
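
        For illustration, a minimal sketch of a naming_convention dictionary using the "uq" mnemonic and one of the templates described above (the account table here is hypothetical):

        1. from sqlalchemy import Column, Integer, MetaData, String, Table, UniqueConstraint
        2. convention = {"uq": "uq_%(table_name)s_%(column_0_name)s"}
        3. metadata = MetaData(naming_convention=convention)
        4. account = Table("account", metadata, Column("id", Integer, primary_key=True), Column("email", String(100)), UniqueConstraint("email"))
        5. # when CREATE TABLE is emitted, the otherwise-unnamed UNIQUE constraint is named "uq_account_email"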

  • attribute sqlalchemy.schema.MetaData.bind

    An Engine or Connection to which this MetaData is bound.

    Typically, an Engine is assigned to this attribute so that “implicit execution” may be used, or alternatively as a means of providing engine binding information to an ORM Session object:

    1. engine = create_engine("someurl://")
    2. metadata.bind = engine

    See also

    Connectionless Execution, Implicit Execution - background on “bound metadata”

  • method sqlalchemy.schema.MetaData.clear()

    Clear all Table objects from this MetaData.

  • method sqlalchemy.schema.MetaData.create_all(bind=None, tables=None, checkfirst=True)

    Create all tables stored in this metadata.

    Conditional by default, will not attempt to recreate tables already present in the target database.

    • Parameters

      • bind

        A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.

        Note

        the “bind” argument will be required in SQLAlchemy 2.0.

      • tables – Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).

      • checkfirst – Defaults to True, don’t issue CREATEs for tables already present in the target database.
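
    For illustration, a minimal sketch of MetaData.create_all() against an in-memory SQLite engine (the table and engine here are hypothetical stand-ins):

    1. from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
    2. engine = create_engine("sqlite://")
    3. metadata = MetaData()
    4. account = Table("account", metadata, Column("id", Integer, primary_key=True), Column("name", String(50)))
    5. # emits CREATE TABLE for "account" only if it is not already present (checkfirst=True is the default)
    6. metadata.create_all(bind=engine)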

  • method sqlalchemy.schema.MetaData.drop_all(bind=None, tables=None, checkfirst=True)

    Drop all tables stored in this metadata.

    Conditional by default, will not attempt to drop tables not present in the target database.

    • Parameters

      • bind

        A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.

        Note

        the “bind” argument will be required in SQLAlchemy 2.0.

      • tables – Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).

      • checkfirst – Defaults to True, only issue DROPs for tables confirmed to be present in the target database.

  • method sqlalchemy.schema.MetaData.is_bound()

    True if this MetaData is bound to an Engine or Connection.

  • method sqlalchemy.schema.MetaData.reflect(bind=None, schema=None, views=False, only=None, extend_existing=False, autoload_replace=True, resolve_fks=True, **dialect_kwargs)

    Load all available table definitions from the database.

    Automatically creates Table entries in this MetaData for any table available in the database but not yet present in the MetaData. May be called multiple times to pick up tables recently added to the database, however no special action is taken if a table in this MetaData no longer exists in the database.

    • Parameters

      • bind

        A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.

        Note

        the “bind” argument will be required in SQLAlchemy 2.0.

      • schema – Optional, query and reflect tables from an alternate schema. If None, the schema associated with this MetaData is used, if any.

      • views – If True, also reflect views.

      • only

        Optional. Load only a sub-set of available named tables. May be specified as a sequence of names or a callable.

        If a sequence of names is provided, only those tables will be reflected. An error is raised if a table is requested but not available. Named tables already present in this MetaData are ignored.

        If a callable is provided, it will be used as a boolean predicate to filter the list of potential table names. The callable is called with a table name and this MetaData instance as positional arguments and should return a true value for any table to reflect.

      • extend_existing

        Passed along to each Table as Table.extend_existing.

        New in version 0.9.1.

      • autoload_replace

        Passed along to each Table as Table.autoload_replace.

        New in version 0.9.1.

      • resolve_fks

        if True, reflect Table objects linked to ForeignKey objects located in each Table. For MetaData.reflect(), this has the effect of reflecting related tables that might otherwise not be in the list of tables being reflected, for example if the referenced table is in a different schema or is omitted via the MetaData.reflect.only parameter. When False, ForeignKey objects are not followed to the Table in which they link, however if the related table is also part of the list of tables that would be reflected in any case, the ForeignKey object will still resolve to its related Table after the MetaData.reflect() operation is complete. Defaults to True.

        New in version 1.3.0.

        See also

        Table.resolve_fks

      • **dialect_kwargs

        Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

        New in version 0.9.2: - Added MetaData.reflect.**dialect_kwargs to support dialect-level reflection options for all Table objects reflected.
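
    For illustration, a minimal sketch of MetaData.reflect() (the database URL here is a hypothetical placeholder):

    1. from sqlalchemy import MetaData, create_engine
    2. engine = create_engine("sqlite:///existing.db")
    3. metadata = MetaData()
    4. # loads Table entries for all tables found in the database into metadata.tables
    5. metadata.reflect(bind=engine)
    6. print(sorted(metadata.tables))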

  • method sqlalchemy.schema.MetaData.remove(table)

    Remove the given Table object from this MetaData.

  • attribute sqlalchemy.schema.MetaData.sorted_tables

    Returns a list of Table objects sorted in order of foreign key dependency.

    The sorting will place the Table objects that are depended upon first, before the tables that depend on them, representing the order in which they can be created. To get the order in which the tables would be dropped, use the reversed() Python built-in.

    Warning

    The MetaData.sorted_tables attribute cannot by itself accommodate automatic resolution of dependency cycles between tables, which are usually caused by mutually dependent foreign key constraints. When these cycles are detected, the foreign keys of these tables are omitted from consideration in the sort. A warning is emitted when this condition occurs; this will become an exception in a future release. Tables which are not part of the cycle will still be returned in dependency order.

    To resolve these cycles, the ForeignKeyConstraint.use_alter parameter may be applied to those constraints which create a cycle. Alternatively, the sort_tables_and_constraints() function will automatically return foreign key constraints in a separate collection when cycles are detected so that they may be applied to a schema separately.

    Changed in version 1.3.17: - a warning is emitted when MetaData.sorted_tables cannot perform a proper sort due to cyclical dependencies. This will be an exception in a future release. Additionally, the sort will continue to return other tables not involved in the cycle in dependency order which was not the case previously.

    See also

    sort_tables()

    sort_tables_and_constraints()

    MetaData.tables

    Inspector.get_table_names()

    Inspector.get_sorted_table_and_fkc_names()

  • attribute sqlalchemy.schema.MetaData.tables = None

    A dictionary of Table objects keyed to their name or “table key”.

    The exact key is that determined by the Table.key attribute; for a table with no Table.schema attribute, this is the same as Table.name. For a table with a schema, it is typically of the form schemaname.tablename.

    See also

    MetaData.sorted_tables
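
    For illustration, a minimal sketch of key-based lookup in MetaData.tables, including a schema-qualified key (the table names here are hypothetical):

    1. from sqlalchemy import Column, Integer, MetaData, Table
    2. metadata = MetaData()
    3. account = Table("account", metadata, Column("id", Integer, primary_key=True))
    4. audit = Table("audit", metadata, Column("id", Integer, primary_key=True), schema="archive")
    5. # keys are "account" for the schema-less table and "archive.audit" for the schema-qualified one
    6. print(sorted(metadata.tables))
    7. assert metadata.tables["archive.audit"] is audit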

class sqlalchemy.schema.SchemaItem

Base class for items that define a database schema.

Class signature

class sqlalchemy.schema.SchemaItem (sqlalchemy.sql.expression.SchemaEventTarget, sqlalchemy.sql.visitors.Traversible)

  • attribute sqlalchemy.schema.SchemaItem.info

    Info dictionary associated with the object, allowing user-defined data to be associated with this SchemaItem.

    The dictionary is automatically generated when first accessed. It can also be specified in the constructor of some objects, such as Table and Column.

class sqlalchemy.schema.Table(*args, **kw)

Represent a table in a database.

e.g.:

  1. mytable = Table("mytable", metadata,
  2. Column('mytable_id', Integer, primary_key=True),
  3. Column('value', String(50))
  4. )

The Table object constructs a unique instance of itself based on its name and optional schema name within the given MetaData object. Calling the Table constructor with the same name and same MetaData argument a second time will return the same Table object - in this way the Table constructor acts as a registry function.

See also

Describing Databases with MetaData - Introduction to database metadata

Constructor arguments are as follows:

  • Parameters

    • name

      The name of this table as represented in the database.

      The table name, along with the value of the schema parameter, forms a key which uniquely identifies this Table within the owning MetaData collection. Additional calls to Table with the same name, metadata, and schema name will return the same Table object.

      Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word or contain special characters. A name with any number of upper case characters is considered to be case sensitive, and will be sent as quoted.

      To enable unconditional quoting for the table name, specify the flag quote=True to the constructor, or use the quoted_name construct to specify the name.

    • metadata – a MetaData object which will contain this table. The metadata is used as a point of association of this table with other tables which are referenced via foreign key. It also may be used to associate this table with a particular Connectable.

    • *args – Additional positional arguments are used primarily to add the list of Column objects contained within this table. Similar to the style of a CREATE TABLE statement, other SchemaItem constructs may be added here, including PrimaryKeyConstraint, and ForeignKeyConstraint.

    • autoload

      Defaults to False, unless Table.autoload_with is set in which case it defaults to True; Column objects for this table should be reflected from the database, possibly augmenting objects that were explicitly specified. Column and other objects explicitly set on the table will replace corresponding reflected objects.

      Deprecated since version 1.4: The autoload parameter is deprecated and will be removed in version 2.0. Please use the Table.autoload_with parameter, passing an engine or connection.

      See also

      Reflecting Database Objects

    • autoload_replace

      Defaults to True; when using Table.autoload in conjunction with Table.extend_existing, indicates that Column objects present in the already-existing Table object should be replaced with columns of the same name retrieved from the autoload process. When False, columns already present under existing names will be omitted from the reflection process.

      Note that this setting does not impact Column objects specified programmatically within the call to Table that also is autoloading; those Column objects will always replace existing columns of the same name when Table.extend_existing is True.

      See also

      Table.autoload

      Table.extend_existing

    • autoload_with – An Engine or Connection object, or a Inspector object as returned by inspect() against one, with which this Table object will be reflected. When set to a non-None value, the autoload process will take place for this table against the given engine or connection.

    • extend_existing

      When True, indicates that if this Table is already present in the given MetaData, apply further arguments within the constructor to the existing Table.

      If Table.extend_existing or Table.keep_existing are not set, and the given name of the new Table refers to a Table that is already present in the target MetaData collection, and this Table specifies additional columns or other constructs or flags that modify the table’s state, an error is raised. The purpose of these two mutually-exclusive flags is to specify what action should be taken when a Table is specified that matches an existing Table, yet specifies additional constructs.

      Table.extend_existing will also work in conjunction with Table.autoload to run a new reflection operation against the database, even if a Table of the same name is already present in the target MetaData; newly reflected Column objects and other options will be added into the state of the Table, potentially overwriting existing columns and options of the same name.

      As is always the case with Table.autoload, Column objects can be specified in the same Table constructor, which will take precedence. Below, the existing table mytable will be augmented with Column objects both reflected from the database, as well as the given Column named “y”:

      1. Table("mytable", metadata,
      2. Column('y', Integer),
      3. extend_existing=True,
      4. autoload_with=engine
      5. )

      See also

      Table.autoload

      Table.autoload_replace

      Table.keep_existing

    • implicit_returning – True by default - indicates that RETURNING can be used by default to fetch newly inserted primary key values, for backends which support this. Note that create_engine() also provides an implicit_returning flag.

    • include_columns – A list of strings indicating a subset of columns to be loaded via the autoload operation; table columns that aren’t present in this list will not be represented on the resulting Table object. Defaults to None which indicates all columns should be reflected.

    • resolve_fks

      Whether or not to reflect Table objects related to this one via ForeignKey objects, when Table.autoload or Table.autoload_with is specified. Defaults to True. Set to False to disable reflection of related tables as ForeignKey objects are encountered; may be used either to save on SQL calls or to avoid issues with related tables that can’t be accessed. Note that if a related table is already present in the MetaData collection, or becomes present later, a ForeignKey object associated with this Table will resolve to that table normally.

      New in version 1.3.

      See also

      MetaData.reflect.resolve_fks

    • info – Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

    • keep_existing

      When True, indicates that if this Table is already present in the given MetaData, ignore further arguments within the constructor to the existing Table, and return the Table object as originally created. This is to allow a function that wishes to define a new Table on first call, but on subsequent calls will return the same Table, without any of the declarations (particularly constraints) being applied a second time.

      If Table.extend_existing or Table.keep_existing are not set, and the given name of the new Table refers to a Table that is already present in the target MetaData collection, and this Table specifies additional columns or other constructs or flags that modify the table’s state, an error is raised. The purpose of these two mutually-exclusive flags is to specify what action should be taken when a Table is specified that matches an existing Table, yet specifies additional constructs.

      See also

      Table.extend_existing

    • listeners

      A list of tuples of the form (<eventname>, <fn>) which will be passed to listen() upon construction. This alternate hook to listen() allows the establishment of a listener function specific to this Table before the “autoload” process begins. Historically this has been intended for use with the DDLEvents.column_reflect() event, however note that this event hook may now be associated with the MetaData object directly:

      1. def listen_for_reflect(table, column_info):
      2. "handle the column reflection event"
      3. # ...
      4. t = Table(
      5. 'sometable',
      6. autoload_with=engine,
      7. listeners=[
      8. ('column_reflect', listen_for_reflect)
      9. ])

      See also

      DDLEvents.column_reflect()

    • must_exist – When True, indicates that this Table must already be present in the given MetaData collection, else an exception is raised.

    • prefixes – A list of strings to insert after CREATE in the CREATE TABLE statement. They will be separated by spaces.

    • quote – Force quoting of this table’s name on or off, corresponding to True or False. When left at its default of None, the table identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.

    • quote_schema – same as ‘quote’ but applies to the schema identifier.

    • schema

      The schema name for this table, which is required if the table resides in a schema other than the default selected schema for the engine’s database connection. Defaults to None.

      If the owning MetaData of this Table specifies its own MetaData.schema parameter, then that schema name will be applied to this Table if the schema parameter here is set to None. To set a blank schema name on a Table that would otherwise use the schema set on the owning MetaData, specify the special symbol BLANK_SCHEMA.

      New in version 1.0.14: Added the BLANK_SCHEMA symbol to allow a Table to have a blank schema name even when the parent MetaData specifies MetaData.schema.

      The quoting rules for the schema name are the same as those for the name parameter, in that quoting is applied for reserved words or case-sensitive names; to enable unconditional quoting for the schema name, specify the flag quote_schema=True to the constructor, or use the quoted_name construct to specify the name.

    • comment

      Optional string that will render an SQL comment on table creation.

      New in version 1.2: Added the Table.comment parameter to Table.

    • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

Class signature

class sqlalchemy.schema.Table (sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.TableClause)

  • method sqlalchemy.schema.Table.__init__(*args, **kw)

    Constructor for Table.

    This method is a no-op. See the top-level documentation for Table for constructor arguments.

  • method sqlalchemy.schema.Table.add_is_dependent_on(table)

    Add a ‘dependency’ for this Table.

    This is another Table object which must be created first before this one can, or dropped after this one.

    Usually, dependencies between tables are determined via ForeignKey objects. However, for other situations that create dependencies outside of foreign keys (rules, inheriting), this method can manually establish such a link.
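
    For illustration, a minimal sketch of declaring a dependency that is not expressed by a ForeignKey (the tables here are hypothetical):

    1. from sqlalchemy import Column, Integer, MetaData, Table
    2. metadata = MetaData()
    3. lookup = Table("lookup", metadata, Column("id", Integer, primary_key=True))
    4. widget = Table("widget", metadata, Column("id", Integer, primary_key=True))
    5. # ensures "lookup" sorts before "widget" in create order, and after it in drop order
    6. widget.add_is_dependent_on(lookup)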

  • method sqlalchemy.schema.Table.alias(name=None, flat=False)

    inherited from the FromClause.alias() method of FromClause

    Return an alias of this FromClause.

    E.g.:

    1. a2 = some_table.alias('a2')

    The above code creates an Alias object which can be used as a FROM clause in any SELECT statement.

    See also

    Using Aliases and Subqueries

    alias()

  • method sqlalchemy.schema.Table.append_column(column, replace_existing=False)

    Append a Column to this Table.

    The “key” of the newly added Column, i.e. the value of its .key attribute, will then be available in the .c collection of this Table, and the column definition will be included in any CREATE TABLE, SELECT, UPDATE, etc. statements generated from this Table construct.

    Note that this does not change the definition of the table as it exists within any underlying database, assuming that table has already been created in the database. Relational databases support the addition of columns to existing tables using the SQL ALTER command, which would need to be emitted for an already-existing table that doesn’t contain the newly added column.

    • Parameters

      replace_existing

      When True, allows replacing existing columns. When False, the default, a warning is raised if a column with the same .key already exists; a future version of SQLAlchemy will raise an error instead.

      New in version 1.4.0.
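
    For illustration, a minimal sketch of Table.append_column(); note that this alters only the in-Python Table object, not any table already created in the database (the account table here is hypothetical):

    1. from sqlalchemy import Column, Integer, MetaData, String, Table
    2. metadata = MetaData()
    3. account = Table("account", metadata, Column("id", Integer, primary_key=True))
    4. account.append_column(Column("name", String(50)))
    5. # the new column is now part of the .c collection and of generated CREATE TABLE / SELECT statements
    6. print("name" in account.c)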

  • method sqlalchemy.schema.Table.append_constraint(constraint)

    Append a Constraint to this Table.

    This has the effect of the constraint being included in any future CREATE TABLE statement, assuming specific DDL creation events have not been associated with the given Constraint object.

    Note that this does not produce the constraint within the relational database automatically, for a table that already exists in the database. To add a constraint to an existing relational database table, the SQL ALTER command must be used. SQLAlchemy also provides the AddConstraint construct which can produce this SQL when invoked as an executable clause.
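
    For illustration, a minimal sketch of Table.append_constraint() using a CheckConstraint (the account table and constraint name here are hypothetical):

    1. from sqlalchemy import CheckConstraint, Column, Integer, MetaData, Table
    2. metadata = MetaData()
    3. account = Table("account", metadata, Column("id", Integer, primary_key=True), Column("balance", Integer))
    4. # the constraint will be included in subsequent CREATE TABLE statements for this Table
    5. account.append_constraint(CheckConstraint("balance >= 0", name="ck_account_balance"))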

  • method sqlalchemy.schema.Table.classmethod argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    1. Index.argument_for("mydialect", "length", None)
    2. some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

  1. New in version 0.9.4.
  • attribute sqlalchemy.schema.Table.bind

    Return the connectable associated with this Table.

  • attribute sqlalchemy.schema.Table.c

    inherited from the FromClause.c attribute of FromClause

    A named-based collection of ColumnElement objects maintained by this FromClause.

    The FromClause.c attribute is an alias for the FromClause.columns attribute.

  • attribute sqlalchemy.schema.Table.columns

    inherited from the FromClause.columns attribute of FromClause

    A named-based collection of ColumnElement objects maintained by this FromClause.

    The columns, or c collection, is the gateway to the construction of SQL expressions using table-bound or other selectable-bound columns:

    1. select(mytable).where(mytable.c.somecolumn == 5)
  • method sqlalchemy.schema.Table.compare(other, **kw)

    inherited from the ClauseElement.compare() method of ClauseElement

    Compare this ClauseElement to the given ClauseElement.

    Subclasses should override the default behavior, which is a straight identity comparison.

    **kw are arguments consumed by subclass compare() methods and may be used to modify the criteria for comparison (see ColumnElement).

  • method sqlalchemy.schema.Table.compile(bind=None, dialect=None, **kw)

    inherited from the ClauseElement.compile() method of ClauseElement

    Compile this SQL expression.

    The return value is a Compiled object. Calling str() or unicode() on the returned value will yield a string representation of the result. The Compiled object also can return a dictionary of bind parameter names and values using the params accessor.

    • Parameters

      • bind – An Engine or Connection from which a Compiled will be acquired. This argument takes precedence over this ClauseElement’s bound engine, if any.

      • column_keys – Used for INSERT and UPDATE statements, a list of column names which should be present in the VALUES clause of the compiled statement. If None, all columns from the target table object are rendered.

      • dialect – A Dialect instance from which a Compiled will be acquired. This argument takes precedence over the bind argument as well as this ClauseElement ‘s bound engine, if any.

      • compile_kwargs

        optional dictionary of additional parameters that will be passed through to the compiler within all “visit” methods. This allows any custom flag to be passed through to a custom compilation construct, for example. It is also used for the case of passing the literal_binds flag through:

        1. from sqlalchemy.sql import table, column, select
        2. t = table('t', column('x'))
        3. s = select(t).where(t.c.x == 5)
        4. print(s.compile(compile_kwargs={"literal_binds": True}))

        New in version 0.9.0.

  1. See also
  2. [How do I render SQL expressions as strings, possibly with bound parameters inlined?]($23f306fd0cdd485d.md#faq-sql-expression-string)
  • attribute sqlalchemy.schema.Table.key

    Return the ‘key’ for this Table.

    This value is used as the dictionary key within the MetaData.tables collection. It is typically the same as that of Table.name for a table with no Table.schema set; otherwise it is typically of the form schemaname.tablename.

  • attribute sqlalchemy.schema.Table.kwargs

    inherited from the DialectKWArgs.kwargs attribute of DialectKWArgs

    A synonym for DialectKWArgs.dialect_kwargs.

  • method sqlalchemy.schema.Table.lateral(name=None)

    inherited from the Selectable.lateral() method of Selectable

    Return a LATERAL alias of this Selectable.

    The return value is the Lateral construct also provided by the top-level lateral() function.

    New in version 1.1.

    See also

    LATERAL correlation - overview of usage.

  • class memoized_attribute(fget, doc=None)

    A read-only @property that is only evaluated once.

  • method sqlalchemy.schema.Table.classmethod memoized_instancemethod(fn)

    inherited from the HasMemoized.memoized_instancemethod() method of HasMemoized

    Decorate a method to memoize its return value.

  • method sqlalchemy.schema.Table.outerjoin(right, onclause=None, full=False)

    inherited from the FromClause.outerjoin() method of FromClause

    Return a Join from this FromClause to another FromClause, with the “isouter” flag set to True.

    E.g.:

    1. from sqlalchemy import outerjoin
    2. j = user_table.outerjoin(address_table,
    3. user_table.c.id == address_table.c.user_id)

    The above is equivalent to:

    1. j = user_table.join(
    2. address_table,
    3. user_table.c.id == address_table.c.user_id,
    4. isouter=True)
    • Parameters

      • right – the right side of the join; this is any FromClause object such as a Table object, and may also be a selectable-compatible object such as an ORM-mapped class.

      • onclause – a SQL expression representing the ON clause of the join. If left at None, FromClause.join() will attempt to join the two tables based on a foreign key relationship.

      • full

        if True, render a FULL OUTER JOIN, instead of LEFT OUTER JOIN.

        New in version 1.1.

  1. See also
  2. [`FromClause.join()`]($fc2d211e9d1454ca.md#sqlalchemy.sql.expression.FromClause.join "sqlalchemy.sql.expression.FromClause.join")
  3. [`Join`]($fc2d211e9d1454ca.md#sqlalchemy.sql.expression.Join "sqlalchemy.sql.expression.Join")
  • method sqlalchemy.schema.Table.self_group(against=None)

    inherited from the ClauseElement.self_group() method of ClauseElement

    Apply a ‘grouping’ to this ClauseElement.

    This method is overridden by subclasses to return a “grouping” construct, i.e. parenthesis. In particular it’s used by “binary” expressions to provide a grouping around themselves when placed into a larger expression, as well as by select() constructs when placed into the FROM clause of another select(). (Note that subqueries should normally be created using the Select.alias() method, as many platforms require nested SELECT statements to be named).

    As expressions are composed together, the application of self_group() is automatic - end-user code should never need to use this method directly. Note that SQLAlchemy’s clause constructs take operator precedence into account - so parenthesis might not be needed, for example, in an expression like x OR (y AND z) - AND takes precedence over OR.

    The base self_group() method of ClauseElement just returns self.

  • method sqlalchemy.schema.Table.table_valued()

    inherited from the FromClause.table_valued() method of FromClause

    Return a TableValuedColumn object for this FromClause.

    A TableValuedColumn is a ColumnElement that represents a complete row in a table. Support for this construct is backend dependent, and is supported in various forms by backends such as PostgreSQL, Oracle and SQL Server.

    E.g.:

    1. >>> from sqlalchemy import select, column, func, table
    2. >>> a = table("a", column("id"), column("x"), column("y"))
    3. >>> stmt = select(func.row_to_json(a.table_valued()))
    4. >>> print(stmt)
    5. SELECT row_to_json(a) AS row_to_json_1
    6. FROM a

    New in version 1.4.0b2.

    See also

    Working with SQL Functions - in the SQLAlchemy 1.4 / 2.0 Tutorial

  • method sqlalchemy.schema.Table.tablesample(sampling, name=None, seed=None)

    inherited from the FromClause.tablesample() method of FromClause

    Return a TABLESAMPLE alias of this FromClause.

    The return value is the TableSample construct also provided by the top-level tablesample() function.

    New in version 1.1.

    See also

    tablesample() - usage guidelines and parameters

  • method sqlalchemy.schema.Table.to_metadata(metadata, schema=symbol('retain_schema'), referred_schema_fn=None, name=None)

    Return a copy of this Table associated with a different MetaData.

    E.g.:

    1. m1 = MetaData()
    2. user = Table('user', m1, Column('id', Integer, primary_key=True))
    3. m2 = MetaData()
    4. user_copy = user.to_metadata(m2)

    Changed in version 1.4: The Table.to_metadata() function was renamed from Table.tometadata().

    • Parameters

      • metadata – Target MetaData object, into which the new Table object will be created.

      • schema

        optional string name indicating the target schema. Defaults to the special symbol RETAIN_SCHEMA which indicates that no change to the schema name should be made in the new Table. If set to a string name, the new Table will have this new name as the .schema. If set to None, the schema will be set to that of the schema set on the target MetaData, which is typically None as well, unless set explicitly:

        1. m2 = MetaData(schema='newschema')
        2. # user_copy_one will have "newschema" as the schema name
        3. user_copy_one = user.to_metadata(m2, schema=None)
        4. m3 = MetaData() # schema defaults to None
        5. # user_copy_two will have None as the schema name
        6. user_copy_two = user.to_metadata(m3, schema=None)
      • referred_schema_fn

        optional callable which can be supplied in order to provide for the schema name that should be assigned to the referenced table of a ForeignKeyConstraint. The callable accepts this parent Table, the target schema that we are changing to, the ForeignKeyConstraint object, and the existing “target schema” of that constraint. The function should return the string schema name that should be applied. E.g.:

        1. def referred_schema_fn(table, to_schema,
        2. constraint, referred_schema):
        3. if referred_schema == 'base_tables':
        4. return referred_schema
        5. else:
        6. return to_schema
        7. new_table = table.to_metadata(m2, schema="alt_schema",
        8. referred_schema_fn=referred_schema_fn)

        New in version 0.9.2.

      • name

        optional string name indicating the target table name. If not specified or None, the table name is retained. This allows a Table to be copied to the same MetaData target with a new name.

        New in version 1.0.0.

class sqlalchemy.schema.ThreadLocalMetaData

A MetaData variant that presents a different bind in every thread.

Deprecated since version 1.4: ThreadLocalMetaData is deprecated and will be removed in a future release.

Makes the bind property of the MetaData a thread-local value, allowing this collection of tables to be bound to different Engine implementations or connections in each thread.

The ThreadLocalMetaData starts off bound to None in each thread. Binds must be made explicitly by assigning to the bind property or using connect(). You can also re-bind dynamically multiple times per thread, just like a regular MetaData.

Class signature

class sqlalchemy.schema.ThreadLocalMetaData (sqlalchemy.schema.MetaData)