Defining Constraints and Indexes

This section discusses SQL constraints and indexes. In SQLAlchemy the key classes are ForeignKeyConstraint and Index.

Defining Foreign Keys

A foreign key in SQL is a table-level construct that constrains one or more columns in that table to only allow values that are present in a different set of columns, typically but not always located on a different table. We call the columns which are constrained the foreign key columns and the columns which they are constrained towards the referenced columns. The referenced columns almost always define the primary key for their owning table, though there are exceptions to this. The foreign key is the “joint” that connects together pairs of rows which have a relationship with each other, and SQLAlchemy assigns very deep importance to this concept in virtually every area of its operation.

In SQLAlchemy as well as in DDL, foreign key constraints can be defined as additional attributes within the table clause, or for single-column foreign keys they may optionally be specified within the definition of a single column. The single column foreign key is more common, and at the column level is specified by constructing a ForeignKey object as an argument to a Column object:

    user_preference = Table('user_preference', metadata,
        Column('pref_id', Integer, primary_key=True),
        Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
        Column('pref_name', String(40), nullable=False),
        Column('pref_value', String(100))
    )

Above, we define a new table user_preference for which each row must contain a value in the user_id column that also exists in the user table’s user_id column.

The argument to ForeignKey is most commonly a string of the form <tablename>.<columnname>, or for a table in a remote schema or “owner” of the form <schemaname>.<tablename>.<columnname>. It may also be an actual Column object, which as we’ll see later is accessed from an existing Table object via its c collection:

    ForeignKey(user.c.user_id)

The advantage to using a string is that the in-python linkage between user and user_preference is resolved only when first needed, so that table objects can be easily spread across multiple modules and defined in any order.
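For example, the following sketch (a condensed variant of the tables above) declares the referring table first; the string reference is only resolved when the foreign key is actually used, such as at create_all() time:

    from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

    metadata = MetaData()

    # the referring table can be declared first; "user.user_id" is resolved lazily
    user_preference = Table('user_preference', metadata,
        Column('pref_id', Integer, primary_key=True),
        Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False)
    )

    # the referenced table only needs to be present in the same MetaData
    # by the time the foreign key is first used
    user = Table('user', metadata,
        Column('user_id', Integer, primary_key=True)
    )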

Foreign keys may also be defined at the table level, using the ForeignKeyConstraint object. This object can describe a single- or multi-column foreign key. A multi-column foreign key is known as a composite foreign key, and almost always references a table that has a composite primary key. Below we define a table invoice which has a composite primary key:

    invoice = Table('invoice', metadata,
        Column('invoice_id', Integer, primary_key=True),
        Column('ref_num', Integer, primary_key=True),
        Column('description', String(60), nullable=False)
    )

And then a table invoice_item with a composite foreign key referencing invoice:

    invoice_item = Table('invoice_item', metadata,
        Column('item_id', Integer, primary_key=True),
        Column('item_name', String(60), nullable=False),
        Column('invoice_id', Integer, nullable=False),
        Column('ref_num', Integer, nullable=False),
        ForeignKeyConstraint(['invoice_id', 'ref_num'], ['invoice.invoice_id', 'invoice.ref_num'])
    )

It’s important to note that the ForeignKeyConstraint is the only way to define a composite foreign key. While we could also have placed individual ForeignKey objects on both the invoice_item.invoice_id and invoice_item.ref_num columns, SQLAlchemy would not be aware that these two values should be paired together - it would be two individual foreign key constraints instead of a single composite foreign key referencing two columns.

Creating/Dropping Foreign Key Constraints via ALTER

The behavior we’ve seen in tutorials and elsewhere involving foreign keys with DDL illustrates that the constraints are typically rendered “inline” within the CREATE TABLE statement, such as:

    CREATE TABLE addresses (
        id INTEGER NOT NULL,
        user_id INTEGER,
        email_address VARCHAR NOT NULL,
        PRIMARY KEY (id),
        CONSTRAINT user_id_fk FOREIGN KEY(user_id) REFERENCES users (id)
    )

The CONSTRAINT .. FOREIGN KEY directive is used to create the constraint in an “inline” fashion within the CREATE TABLE definition. The MetaData.create_all() and MetaData.drop_all() methods do this by default, using a topological sort of all the Table objects involved such that tables are created and dropped in order of their foreign key dependency (this sort is also available via the MetaData.sorted_tables accessor).
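For instance, assuming the user, user_preference, invoice and invoice_item tables from the examples above are all collected in one MetaData, the dependency ordering can be inspected directly; a brief illustration, where referenced tables sort before the tables that depend on them (the exact ordering among unrelated tables may vary):

    >>> [t.name for t in metadata.sorted_tables]
    ['invoice', 'invoice_item', 'user', 'user_preference']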

This approach can’t work when two or more foreign key constraints are involved in a “dependency cycle”, where a set of tables are mutually dependent on each other, assuming the backend enforces foreign keys (always the case except on SQLite, MySQL/MyISAM). The methods will therefore break out constraints in such a cycle into separate ALTER statements, on all backends other than SQLite which does not support most forms of ALTER. Given a schema like:

    node = Table(
        'node', metadata,
        Column('node_id', Integer, primary_key=True),
        Column(
            'primary_element', Integer,
            ForeignKey('element.element_id')
        )
    )

    element = Table(
        'element', metadata,
        Column('element_id', Integer, primary_key=True),
        Column('parent_node_id', Integer),
        ForeignKeyConstraint(
            ['parent_node_id'], ['node.node_id'],
            name='fk_element_parent_node_id'
        )
    )

When we call upon MetaData.create_all() on a backend such as the PostgreSQL backend, the cycle between these two tables is resolved and the constraints are created separately:

    >>> with engine.connect() as conn:
    ...     metadata.create_all(conn, checkfirst=False)
    CREATE TABLE element (
        element_id SERIAL NOT NULL,
        parent_node_id INTEGER,
        PRIMARY KEY (element_id)
    )

    CREATE TABLE node (
        node_id SERIAL NOT NULL,
        primary_element INTEGER,
        PRIMARY KEY (node_id)
    )

    ALTER TABLE element ADD CONSTRAINT fk_element_parent_node_id
        FOREIGN KEY(parent_node_id) REFERENCES node (node_id)
    ALTER TABLE node ADD FOREIGN KEY(primary_element)
        REFERENCES element (element_id)

In order to emit DROP for these tables, the same logic applies, however note here that in SQL, to emit DROP CONSTRAINT requires that the constraint has a name. In the case of the 'node' table above, we haven’t named this constraint; the system will therefore attempt to emit DROP for only those constraints that are named:

    >>> with engine.connect() as conn:
    ...     metadata.drop_all(conn, checkfirst=False)
    ALTER TABLE element DROP CONSTRAINT fk_element_parent_node_id
    DROP TABLE node
    DROP TABLE element

In the case where the cycle cannot be resolved, such as if we hadn’t applied a name to either constraint here, we will receive the following error:

    sqlalchemy.exc.CircularDependencyError: Can't sort tables for DROP;
    an unresolvable foreign key dependency exists between tables:
    element, node. Please ensure that the ForeignKey and ForeignKeyConstraint
    objects involved in the cycle have names so that they can be dropped
    using DROP CONSTRAINT.

This error only applies to the DROP case as we can emit “ADD CONSTRAINT” in the CREATE case without a name; the database typically assigns one automatically.

The ForeignKeyConstraint.use_alter and ForeignKey.use_alter keyword arguments can be used to manually resolve dependency cycles. We can add this flag only to the 'element' table as follows:

    element = Table(
        'element', metadata,
        Column('element_id', Integer, primary_key=True),
        Column('parent_node_id', Integer),
        ForeignKeyConstraint(
            ['parent_node_id'], ['node.node_id'],
            use_alter=True, name='fk_element_parent_node_id'
        )
    )

in our CREATE DDL we will see the ALTER statement only for this constraint, and not the other one:

    >>> with engine.connect() as conn:
    ...     metadata.create_all(conn, checkfirst=False)
    CREATE TABLE element (
        element_id SERIAL NOT NULL,
        parent_node_id INTEGER,
        PRIMARY KEY (element_id)
    )

    CREATE TABLE node (
        node_id SERIAL NOT NULL,
        primary_element INTEGER,
        PRIMARY KEY (node_id),
        FOREIGN KEY(primary_element) REFERENCES element (element_id)
    )

    ALTER TABLE element ADD CONSTRAINT fk_element_parent_node_id
        FOREIGN KEY(parent_node_id) REFERENCES node (node_id)

ForeignKeyConstraint.use_alter and ForeignKey.use_alter, when used in conjunction with a drop operation, will require that the constraint is named, else an error like the following is generated:

    sqlalchemy.exc.CompileError: Can't emit DROP CONSTRAINT for constraint
    ForeignKeyConstraint(...); it has no name

Changed in version 1.0.0: - The DDL system invoked by MetaData.create_all() and MetaData.drop_all() will now automatically resolve mutually dependent foreign keys between tables declared by ForeignKeyConstraint and ForeignKey objects, without the need to explicitly set the ForeignKeyConstraint.use_alter flag.

Changed in version 1.0.0: - The ForeignKeyConstraint.use_alter flag can be used with an un-named constraint; only the DROP operation will emit a specific error when actually called upon.

See also

Configuring Constraint Naming Conventions

sort_tables_and_constraints()

ON UPDATE and ON DELETE

Most databases support cascading of foreign key values, that is, when a parent row is updated the new value is placed in child rows, or when the parent row is deleted all corresponding child rows are set to null or deleted. In data definition language these are specified using phrases like “ON UPDATE CASCADE”, “ON DELETE CASCADE”, and “ON DELETE SET NULL”, corresponding to foreign key constraints. The phrase after “ON UPDATE” or “ON DELETE” may also allow other phrases that are specific to the database in use. The ForeignKey and ForeignKeyConstraint objects support the generation of this clause via the onupdate and ondelete keyword arguments. The value is any string which will be output after the appropriate “ON UPDATE” or “ON DELETE” phrase:

    child = Table('child', meta,
        Column('id', Integer,
               ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE"),
               primary_key=True
        )
    )

    composite = Table('composite', meta,
        Column('id', Integer, primary_key=True),
        Column('rev_id', Integer),
        Column('note_id', Integer),
        ForeignKeyConstraint(
            ['rev_id', 'note_id'],
            ['revisions.id', 'revisions.note_id'],
            onupdate="CASCADE", ondelete="SET NULL"
        )
    )

Note that these clauses require InnoDB tables when used with MySQL. They may also not be supported on other databases.
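When targeting MySQL, the InnoDB storage engine can be requested per table via the dialect-specific mysql_engine keyword argument; a brief sketch, reusing the child table definition from above:

    child = Table('child', meta,
        Column('id', Integer,
               ForeignKey('parent.id', onupdate="CASCADE", ondelete="CASCADE"),
               primary_key=True),
        # ensure the MySQL storage engine that enforces these clauses
        mysql_engine='InnoDB'
    )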

See also

For background on integration of ON DELETE CASCADE with ORM relationship() constructs, see the following sections:

Using foreign key ON DELETE cascade with ORM relationships

Using foreign key ON DELETE with many-to-many relationships

UNIQUE Constraint

Unique constraints can be created anonymously on a single column using the unique keyword on Column. Explicitly named unique constraints and/or those with multiple columns are created via the UniqueConstraint table-level construct.

    from sqlalchemy import UniqueConstraint

    meta = MetaData()
    mytable = Table('mytable', meta,
        # per-column anonymous unique constraint
        Column('col1', Integer, unique=True),

        Column('col2', Integer),
        Column('col3', Integer),

        # explicit/composite unique constraint. 'name' is optional.
        UniqueConstraint('col2', 'col3', name='uix_1')
    )

CHECK Constraint

Check constraints can be named or unnamed and can be created at the Column or Table level, using the CheckConstraint construct. The text of the check constraint is passed directly through to the database, so there is limited “database independent” behavior. Column level check constraints generally should only refer to the column to which they are placed, while table level constraints can refer to any columns in the table.

Note that some databases, such as MySQL, do not actively support check constraints.

    from sqlalchemy import CheckConstraint

    meta = MetaData()
    mytable = Table('mytable', meta,
        # per-column CHECK constraint
        Column('col1', Integer, CheckConstraint('col1>5')),

        Column('col2', Integer),
        Column('col3', Integer),

        # table level CHECK constraint. 'name' is optional.
        CheckConstraint('col2 > col3 + 5', name='check1')
    )

    mytable.create(engine)
    CREATE TABLE mytable (
        col1 INTEGER CHECK (col1>5),
        col2 INTEGER,
        col3 INTEGER,
        CONSTRAINT check1 CHECK (col2 > col3 + 5)
    )

PRIMARY KEY Constraint

The primary key constraint of any Table object is implicitly present, based on the Column objects that are marked with the Column.primary_key flag. The PrimaryKeyConstraint object provides explicit access to this constraint, which includes the option of being configured directly:

    from sqlalchemy import PrimaryKeyConstraint

    my_table = Table('mytable', metadata,
        Column('id', Integer),
        Column('version_id', Integer),
        Column('data', String(50)),
        PrimaryKeyConstraint('id', 'version_id', name='mytable_pk')
    )

See also

PrimaryKeyConstraint - detailed API documentation.

Setting up Constraints when using the Declarative ORM Extension

The Table is the SQLAlchemy Core construct that allows one to define table metadata, which among other things can be used by the SQLAlchemy ORM as a target to map a class. The Declarative extension allows the Table object to be created automatically, given the contents of the table primarily as a mapping of Column objects.

To apply table-level constraint objects such as ForeignKeyConstraint to a table defined using Declarative, use the __table_args__ attribute, described at Table Configuration.
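A minimal sketch of that form, mirroring the composite foreign key example from earlier in this section, assuming SQLAlchemy 1.4's sqlalchemy.orm.declarative_base() (older versions import it from sqlalchemy.ext.declarative):

    from sqlalchemy import Column, ForeignKeyConstraint, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Invoice(Base):
        __tablename__ = 'invoice'

        invoice_id = Column(Integer, primary_key=True)
        ref_num = Column(Integer, primary_key=True)
        description = Column(String(60), nullable=False)

    class InvoiceItem(Base):
        __tablename__ = 'invoice_item'

        # table-level constraints are supplied as a tuple via __table_args__
        __table_args__ = (
            ForeignKeyConstraint(
                ['invoice_id', 'ref_num'],
                ['invoice.invoice_id', 'invoice.ref_num']
            ),
        )

        item_id = Column(Integer, primary_key=True)
        item_name = Column(String(60), nullable=False)
        invoice_id = Column(Integer, nullable=False)
        ref_num = Column(Integer, nullable=False)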

Configuring Constraint Naming Conventions

Relational databases typically assign explicit names to all constraints and indexes. In the common case that a table is created using CREATE TABLE where constraints such as CHECK, UNIQUE, and PRIMARY KEY constraints are produced inline with the table definition, the database usually has a system in place in which names are automatically assigned to these constraints, if a name is not otherwise specified. When an existing database table is altered in a database using a command such as ALTER TABLE, this command typically needs to specify explicit names for new constraints as well as be able to specify the name of an existing constraint that is to be dropped or modified.

Constraints can be named explicitly using the Constraint.name parameter, and for indexes the Index.name parameter. However, in the case of constraints this parameter is optional. There are also the use cases of using the Column.unique and Column.index parameters which create UniqueConstraint and Index objects without an explicit name being specified.

The use case of alteration of existing tables and constraints can be handled by schema migration tools such as Alembic. However, neither Alembic nor SQLAlchemy currently create names for constraint objects where the name is otherwise unspecified, leading to the case where being able to alter existing constraints means that one must reverse-engineer the naming system used by the relational database to auto-assign names, or that care must be taken to ensure that all constraints are named.

In contrast to having to assign explicit names to all Constraint and Index objects, automated naming schemes can be constructed using events. This approach has the advantage that constraints will get a consistent naming scheme without the need for explicit name parameters throughout the code, and also that the convention takes place just as well for those constraints and indexes produced by the Column.unique and Column.index parameters. As of SQLAlchemy 0.9.2 this event-based approach is included, and can be configured using the argument MetaData.naming_convention.

Configuring a Naming Convention for a MetaData Collection

MetaData.naming_convention refers to a dictionary which accepts the Index class or individual Constraint classes as keys, and Python string templates as values. It also accepts a series of string-codes as alternative keys, "fk", "pk", "ix", "ck", "uq" for foreign key, primary key, index, check, and unique constraint, respectively. The string templates in this dictionary are used whenever a constraint or index is associated with this MetaData object that does not have an existing name given (including one exception case where an existing name can be further embellished).

An example naming convention that suits basic cases is as follows:

    convention = {
        "ix": 'ix_%(column_0_label)s',
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s"
    }

    metadata = MetaData(naming_convention=convention)

The above convention will establish names for all constraints within the target MetaData collection. For example, we can observe the name produced when we create an unnamed UniqueConstraint:

    >>> user_table = Table('user', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('name', String(30), nullable=False),
    ...     UniqueConstraint('name')
    ... )
    >>> list(user_table.constraints)[1].name
    'uq_user_name'

This same feature takes effect even if we just use the Column.unique flag:

    >>> user_table = Table('user', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('name', String(30), nullable=False, unique=True)
    ... )
    >>> list(user_table.constraints)[1].name
    'uq_user_name'

A key advantage to the naming convention approach is that the names are established at Python construction time, rather than at DDL emit time. The effect this has when using Alembic’s --autogenerate feature is that the naming convention will be explicit when a new migration script is generated:

    def upgrade():
        op.create_unique_constraint("uq_user_name", "user", ["name"])

The above "uq_user_name" string was copied from the UniqueConstraint object that --autogenerate located in our metadata.
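The same explicit name is then available to the downgrade path as well; an illustrative Alembic operation:

    def downgrade():
        op.drop_constraint("uq_user_name", "user", type_="unique")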

The tokens available include %(table_name)s, %(referred_table_name)s, %(column_0_name)s, %(column_0_label)s, %(column_0_key)s, %(referred_column_0_name)s, and %(constraint_name)s, as well as multiple-column versions of each including %(column_0N_name)s, %(column_0_N_name)s, %(referred_column_0_N_name)s which render all column names separated with or without an underscore. The documentation for MetaData.naming_convention has further detail on each of these conventions.

The Default Naming Convention

The default value for MetaData.naming_convention handles the long-standing SQLAlchemy behavior of assigning a name to a Index object that is created using the Column.index parameter:

    >>> from sqlalchemy.sql.schema import DEFAULT_NAMING_CONVENTION
    >>> DEFAULT_NAMING_CONVENTION
    immutabledict({'ix': 'ix_%(column_0_label)s'})
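A brief illustration of this default behavior, where a column marked with Column.index under a plain MetaData receives a name derived from the column's label:

    >>> metadata = MetaData()
    >>> t = Table('sometable', metadata,
    ...     Column('col1', Integer, index=True)
    ... )
    >>> list(t.indexes)[0].name
    'ix_sometable_col1'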

Truncation of Long Names

When a generated name, particularly those that use the multiple-column tokens, is too long for the identifier length limit of the target database (for example, PostgreSQL has a limit of 63 characters), the name will be deterministically truncated using a 4-character suffix based on the md5 hash of the long name. For example, the naming convention below will generate very long names given the column names in use:

    metadata = MetaData(naming_convention={
        "uq": "uq_%(table_name)s_%(column_0_N_name)s"
    })

    long_names = Table(
        'long_names', metadata,
        Column('information_channel_code', Integer, key='a'),
        Column('billing_convention_name', Integer, key='b'),
        Column('product_identifier', Integer, key='c'),
        UniqueConstraint('a', 'b', 'c')
    )

On the PostgreSQL dialect, names longer than 63 characters will be truncated as in the following example:

    CREATE TABLE long_names (
        information_channel_code INTEGER,
        billing_convention_name INTEGER,
        product_identifier INTEGER,
        CONSTRAINT uq_long_names_information_channel_code_billing_conventi_a79e
            UNIQUE (information_channel_code, billing_convention_name, product_identifier)
    )

The above suffix a79e is based on the md5 hash of the long name and will generate the same value every time to produce consistent names for a given schema.
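The effect can be approximated with a conceptual sketch such as the following; this is not SQLAlchemy's internal implementation, and the prefix length and hash slice chosen here are assumptions for illustration only:

    import hashlib

    def truncated_name(name, max_length=63):
        # names that already fit are left alone
        if len(name) <= max_length:
            return name
        # otherwise keep a prefix and append a short md5-derived suffix, so the
        # same long name always truncates to the same identifier
        suffix = hashlib.md5(name.encode("utf-8")).hexdigest()[-4:]
        return name[: max_length - len(suffix) - 1] + "_" + suffix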

Creating Custom Tokens for Naming Conventions

New tokens can also be added, by specifying an additional token and a callable within the naming_convention dictionary. For example, if we wanted to name our foreign key constraints using a GUID scheme, we could do that as follows:

    import uuid

    def fk_guid(constraint, table):
        str_tokens = [
            table.name,
        ] + [
            element.parent.name for element in constraint.elements
        ] + [
            element.target_fullname for element in constraint.elements
        ]
        # uuid.uuid5() takes the name as a string under Python 3
        guid = uuid.uuid5(uuid.NAMESPACE_OID, "_".join(str_tokens))
        return str(guid)

    convention = {
        "fk_guid": fk_guid,
        "ix": 'ix_%(column_0_label)s',
        "fk": "fk_%(fk_guid)s",
    }

Above, when we create a new ForeignKeyConstraint, we will get a name as follows:

    >>> metadata = MetaData(naming_convention=convention)
    >>> user_table = Table('user', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('version', Integer, primary_key=True),
    ...     Column('data', String(30))
    ... )
    >>> address_table = Table('address', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('user_id', Integer),
    ...     Column('user_version_id', Integer)
    ... )
    >>> fk = ForeignKeyConstraint(['user_id', 'user_version_id'],
    ...     ['user.id', 'user.version'])
    >>> address_table.append_constraint(fk)
    >>> fk.name
    fk_0cd51ab5-8d70-56e8-a83c-86661737766d

See also

MetaData.naming_convention - for additional usage details as well as a listing of all available naming components.

The Importance of Naming Constraints - in the Alembic documentation.

New in version 1.3.0: added multi-column naming tokens such as %(column_0_N_name)s. Generated names that go beyond the character limit for the target database will be deterministically truncated.

Naming CHECK Constraints

The CheckConstraint object is configured against an arbitrary SQL expression, which can have any number of columns present, and additionally is often configured using a raw SQL string. Therefore a common convention to use with CheckConstraint is one where we expect the object to have a name already, and we then enhance it with other convention elements. A typical convention is "ck_%(table_name)s_%(constraint_name)s":

    metadata = MetaData(
        naming_convention={"ck": "ck_%(table_name)s_%(constraint_name)s"}
    )

    Table('foo', metadata,
        Column('value', Integer),
        CheckConstraint('value > 5', name='value_gt_5')
    )

The above table will produce the name ck_foo_value_gt_5:

    CREATE TABLE foo (
        value INTEGER,
        CONSTRAINT ck_foo_value_gt_5 CHECK (value > 5)
    )

CheckConstraint also supports the %(column_0_name)s token; we can make use of this by ensuring we use a Column or column() element within the constraint’s expression, either by declaring the constraint separate from the table:

    metadata = MetaData(
        naming_convention={"ck": "ck_%(table_name)s_%(column_0_name)s"}
    )

    foo = Table('foo', metadata,
        Column('value', Integer)
    )

    CheckConstraint(foo.c.value > 5)

or by using a column() inline:

    from sqlalchemy import column

    metadata = MetaData(
        naming_convention={"ck": "ck_%(table_name)s_%(column_0_name)s"}
    )

    foo = Table('foo', metadata,
        Column('value', Integer),
        CheckConstraint(column('value') > 5)
    )

Both will produce the name ck_foo_value:

    CREATE TABLE foo (
        value INTEGER,
        CONSTRAINT ck_foo_value CHECK (value > 5)
    )

The determination of the name of “column zero” is performed by scanning the given expression for column objects. If the expression has more than one column present, the scan does use a deterministic search, however the structure of the expression will determine which column is noted as “column zero”.

New in version 1.0.0: The CheckConstraint object now supports the column_0_name naming convention token.

Configuring Naming for Boolean, Enum, and other schema types

The SchemaType class refers to type objects such as Boolean and Enum which generate a CHECK constraint accompanying the type. The name for the constraint here is most directly set up by sending the “name” parameter, e.g. Boolean.name:

    Table('foo', metadata,
        Column('flag', Boolean(name='ck_foo_flag'))
    )

The naming convention feature may be combined with these types as well, normally by using a convention which includes %(constraint_name)s and then applying a name to the type:

    metadata = MetaData(
        naming_convention={"ck": "ck_%(table_name)s_%(constraint_name)s"}
    )

    Table('foo', metadata,
        Column('flag', Boolean(name='flag_bool'))
    )

The above table will produce the constraint name ck_foo_flag_bool:

    CREATE TABLE foo (
        flag BOOL,
        CONSTRAINT ck_foo_flag_bool CHECK (flag IN (0, 1))
    )

The SchemaType classes use special internal symbols so that the naming convention is only determined at DDL compile time. On PostgreSQL, there’s a native BOOLEAN type, so the CHECK constraint of Boolean is not needed; we are safe to set up a Boolean type without a name, even though a naming convention is in place for check constraints. This convention will only be consulted for the CHECK constraint if we run against a database without a native BOOLEAN type like SQLite or MySQL.

The CHECK constraint may also make use of the column_0_name token, which works nicely with SchemaType since these constraints have only one column:

    metadata = MetaData(
        naming_convention={"ck": "ck_%(table_name)s_%(column_0_name)s"}
    )

    Table('foo', metadata,
        Column('flag', Boolean())
    )

The above schema will produce:

    CREATE TABLE foo (
        flag BOOL,
        CONSTRAINT ck_foo_flag CHECK (flag IN (0, 1))
    )

Changed in version 1.0: Constraint naming conventions that don’t include %(constraint_name)s again work with SchemaType constraints.

Constraints API

Object Name - Description

CheckConstraint - A table- or column-level CHECK constraint.

ColumnCollectionConstraint - A constraint that proxies a ColumnCollection.

ColumnCollectionMixin

Constraint - A table-level SQL constraint.

conv - Mark a string indicating that a name has already been converted by a naming convention.

ForeignKey - Defines a dependency between two columns.

ForeignKeyConstraint - A table-level FOREIGN KEY constraint.

PrimaryKeyConstraint - A table-level PRIMARY KEY constraint.

UniqueConstraint - A table-level UNIQUE constraint.

class sqlalchemy.schema.Constraint(name=None, deferrable=None, initially=None, _create_rule=None, info=None, _type_bound=False, **dialect_kw)

A table-level SQL constraint.

Constraint serves as the base class for the series of constraint objects that can be associated with Table objects, including PrimaryKeyConstraint, ForeignKeyConstraint, UniqueConstraint, and CheckConstraint.

Class signature

class sqlalchemy.schema.Constraint (sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.SchemaItem)

  • method sqlalchemy.schema.Constraint.__init__(name=None, deferrable=None, initially=None, _create_rule=None, info=None, _type_bound=False, **dialect_kw)

    Create a SQL constraint.

    • Parameters

      • name – Optional, the in-database name of this Constraint.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • info

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

      • **dialect_kw – Additional keyword arguments are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

      • _create_rule – used internally by some datatypes that also create constraints.

      • _type_bound – used internally to indicate that this constraint is associated with a specific datatype.

class sqlalchemy.schema.ColumnCollectionMixin(*columns, **kw)

class sqlalchemy.schema.ColumnCollectionConstraint(*columns, **kw)

A constraint that proxies a ColumnCollection.

Class signature

class sqlalchemy.schema.ColumnCollectionConstraint (sqlalchemy.schema.ColumnCollectionMixin, sqlalchemy.schema.Constraint)

  • method sqlalchemy.schema.ColumnCollectionConstraint.__init__(*columns, **kw)

    • Parameters

      • *columns – A sequence of column names or Column objects.

      • name – Optional, the in-database name of this constraint.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • **kw – other keyword arguments including dialect-specific arguments are propagated to the Constraint superclass.

  • classmethod sqlalchemy.schema.ColumnCollectionConstraint.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

class sqlalchemy.schema.CheckConstraint(sqltext, name=None, deferrable=None, initially=None, table=None, info=None, _create_rule=None, _autoattach=True, _type_bound=False, **kw)

A table- or column-level CHECK constraint.

Can be included in the definition of a Table or Column.

Class signature

class sqlalchemy.schema.CheckConstraint (sqlalchemy.schema.ColumnCollectionConstraint)

  • method sqlalchemy.schema.CheckConstraint.__init__(sqltext, name=None, deferrable=None, initially=None, table=None, info=None, _create_rule=None, _autoattach=True, _type_bound=False, **kw)

    Construct a CHECK constraint.

    • Parameters

      • sqltext

        A string containing the constraint definition, which will be used verbatim, or a SQL expression construct. If given as a string, the object is converted to a text() object. If the textual string includes a colon character, escape this using a backslash:

        1. CheckConstraint(r"foo ~ E'a(?\:b|c)d")

        Warning

        The CheckConstraint.sqltext argument to CheckConstraint can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.

      • name – Optional, the in-database name of the constraint.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • info

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

  • classmethod sqlalchemy.schema.CheckConstraint.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

class sqlalchemy.schema.ForeignKey(column, _constraint=None, use_alter=False, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, link_to_name=False, match=None, info=None, **dialect_kw)

Defines a dependency between two columns.

ForeignKey is specified as an argument to a Column object, e.g.:

  1. t = Table("remote_table", metadata,
  2. Column("remote_id", ForeignKey("main_table.id"))
  3. )

Note that ForeignKey is only a marker object that defines a dependency between two columns. The actual constraint is in all cases represented by the ForeignKeyConstraint object. This object will be generated automatically when a ForeignKey is associated with a Column which in turn is associated with a Table. Conversely, when ForeignKeyConstraint is applied to a Table, ForeignKey markers are automatically generated to be present on each associated Column, which are also associated with the constraint object.

Note that you cannot define a “composite” foreign key constraint, that is a constraint between a grouping of multiple parent/child columns, using ForeignKey objects. To define this grouping, the ForeignKeyConstraint object must be used, and applied to the Table. The associated ForeignKey objects are created automatically.

The ForeignKey objects associated with an individual Column object are available in the foreign_keys collection of that column.
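For example, using the remote_table definition above, the marker can be retrieved from the column; a brief illustration:

    >>> list(t.c.remote_id.foreign_keys)
    [ForeignKey('main_table.id')]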

Further examples of foreign key configuration are in Defining Foreign Keys.

Class signature

class sqlalchemy.schema.ForeignKey (sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.SchemaItem)

  • method sqlalchemy.schema.ForeignKey.__init__(column, _constraint=None, use_alter=False, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, link_to_name=False, match=None, info=None, **dialect_kw)

    Construct a column-level FOREIGN KEY.

    The ForeignKey object when constructed generates a ForeignKeyConstraint which is associated with the parent Table object’s collection of constraints.

    • Parameters

      • column – A single target column for the key relationship. A Column object or a column name as a string: tablename.columnkey or schema.tablename.columnkey. columnkey is the key which has been assigned to the column (defaults to the column name itself), unless link_to_name is True in which case the rendered name of the column is used.

      • name – Optional string. An in-database name for the key if constraint is not provided.

      • onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.

      • ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • link_to_name – if True, the string name given in column is the rendered name of the referenced column, not its locally assigned key.

      • use_alter

        passed to the underlying ForeignKeyConstraint to indicate the constraint should be generated/dropped externally from the CREATE TABLE/ DROP TABLE statement. See ForeignKeyConstraint.use_alter for further description.

        See also

        ForeignKeyConstraint.use_alter

        Creating/Dropping Foreign Key Constraints via ALTER

      • match – Optional string. If set, emit MATCH <value> when issuing DDL for this constraint. Typical values include SIMPLE, PARTIAL and FULL.

      • info

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

      • **dialect_kw

        Additional keyword arguments are dialect specific, and passed in the form <dialectname>_<argname>. The arguments are ultimately handled by a corresponding ForeignKeyConstraint. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

        New in version 0.9.2.

  • classmethod sqlalchemy.schema.ForeignKey.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

class sqlalchemy.schema.ForeignKeyConstraint(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, use_alter=False, link_to_name=False, match=None, table=None, info=None, **dialect_kw)

A table-level FOREIGN KEY constraint.

Defines a single column or composite FOREIGN KEY … REFERENCES constraint. For a no-frills, single column foreign key, adding a ForeignKey to the definition of a Column is a shorthand equivalent for an unnamed, single column ForeignKeyConstraint.

Examples of foreign key configuration are in Defining Foreign Keys.

Class signature

class sqlalchemy.schema.ForeignKeyConstraint (sqlalchemy.schema.ColumnCollectionConstraint)

  • method sqlalchemy.schema.ForeignKeyConstraint.__init__(columns, refcolumns, name=None, onupdate=None, ondelete=None, deferrable=None, initially=None, use_alter=False, link_to_name=False, match=None, table=None, info=None, **dialect_kw)

    Construct a composite-capable FOREIGN KEY.

    • Parameters

      • columns – A sequence of local column names. The named columns must be defined and present in the parent Table. The names should match the key given to each column (defaults to the name) unless link_to_name is True.

      • refcolumns – A sequence of foreign column names or Column objects. The columns must all be located within the same Table.

      • name – Optional, the in-database name of the key.

      • onupdate – Optional string. If set, emit ON UPDATE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.

      • ondelete – Optional string. If set, emit ON DELETE <value> when issuing DDL for this constraint. Typical values include CASCADE, DELETE and RESTRICT.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • link_to_name – if True, the string name given in column is the rendered name of the referenced column, not its locally assigned key.

      • use_alter

        If True, do not emit the DDL for this constraint as part of the CREATE TABLE definition. Instead, generate it via an ALTER TABLE statement issued after the full collection of tables have been created, and drop it via an ALTER TABLE statement before the full collection of tables are dropped.

        The use of ForeignKeyConstraint.use_alter is particularly geared towards the case where two or more tables are established within a mutually-dependent foreign key constraint relationship; however, the MetaData.create_all() and MetaData.drop_all() methods will perform this resolution automatically, so the flag is normally not needed.

        Changed in version 1.0.0: Automatic resolution of foreign key cycles has been added, removing the need to use the ForeignKeyConstraint.use_alter in typical use cases.

        See also

        Creating/Dropping Foreign Key Constraints via ALTER

      • match – Optional string. If set, emit MATCH <value> when issuing DDL for this constraint. Typical values include SIMPLE, PARTIAL and FULL.

      • info

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

      • **dialect_kw

        Additional keyword arguments are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

        New in version 0.9.2.

  • classmethod sqlalchemy.schema.ForeignKeyConstraint.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

class sqlalchemy.schema.PrimaryKeyConstraint(*columns, **kw)

A table-level PRIMARY KEY constraint.

The PrimaryKeyConstraint object is present automatically on any Table object; it is assigned a set of Column objects corresponding to those marked with the Column.primary_key flag:

    >>> my_table = Table('mytable', metadata,
    ...     Column('id', Integer, primary_key=True),
    ...     Column('version_id', Integer, primary_key=True),
    ...     Column('data', String(50))
    ... )
    >>> my_table.primary_key
    PrimaryKeyConstraint(
        Column('id', Integer(), table=<mytable>,
               primary_key=True, nullable=False),
        Column('version_id', Integer(), table=<mytable>,
               primary_key=True, nullable=False)
    )

The primary key of a Table can also be specified by using a PrimaryKeyConstraint object explicitly; in this mode of usage, the “name” of the constraint can also be specified, as well as other options which may be recognized by dialects:

    my_table = Table('mytable', metadata,
        Column('id', Integer),
        Column('version_id', Integer),
        Column('data', String(50)),
        PrimaryKeyConstraint('id', 'version_id',
                             name='mytable_pk')
    )

The two styles of column-specification should generally not be mixed. A warning is emitted if the columns present in the PrimaryKeyConstraint don’t match the columns that were marked as primary_key=True, if both are present; in this case, the columns are taken strictly from the PrimaryKeyConstraint declaration, and those columns otherwise marked as primary_key=True are ignored. This behavior is intended to be backwards compatible with previous behavior.

Changed in version 0.9.2: Using a mixture of columns within a PrimaryKeyConstraint in addition to columns marked as primary_key=True now emits a warning if the lists don’t match. The ultimate behavior of ignoring those columns marked with the flag only is currently maintained for backwards compatibility; this warning may raise an exception in a future release.

For the use case where specific options are to be specified on the PrimaryKeyConstraint, but the usual style of using primary_key=True flags is still desirable, an empty PrimaryKeyConstraint may be specified, which will take on the primary key column collection from the Table based on the flags:

    my_table = Table('mytable', metadata,
        Column('id', Integer, primary_key=True),
        Column('version_id', Integer, primary_key=True),
        Column('data', String(50)),
        PrimaryKeyConstraint(name='mytable_pk',
                             mssql_clustered=True)
    )

New in version 0.9.2: an empty PrimaryKeyConstraint may now be specified for the purposes of establishing keyword arguments with the constraint, independently of the specification of “primary key” columns within the Table itself; columns marked as primary_key=True will be gathered into the empty constraint’s column collection.

Class signature

class sqlalchemy.schema.PrimaryKeyConstraint (sqlalchemy.schema.ColumnCollectionConstraint)

  • classmethod sqlalchemy.schema.PrimaryKeyConstraint.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

class sqlalchemy.schema.UniqueConstraint(*columns, **kw)

A table-level UNIQUE constraint.

Defines a single column or composite UNIQUE constraint. For a no-frills, single column constraint, adding unique=True to the Column definition is a shorthand equivalent for an unnamed, single column UniqueConstraint.
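A brief sketch of that equivalence, where both forms produce an unnamed, single-column UNIQUE constraint:

    # shorthand form
    Table('t1', metadata, Column('x', Integer, unique=True))

    # table-level form
    Table('t2', metadata,
        Column('x', Integer),
        UniqueConstraint('x')
    )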

Class signature

class sqlalchemy.schema.UniqueConstraint (sqlalchemy.schema.ColumnCollectionConstraint)

  • method sqlalchemy.schema.UniqueConstraint.__init__(*columns, **kw)

    inherited from the sqlalchemy.schema.ColumnCollectionConstraint.__init__ method of ColumnCollectionConstraint

    • Parameters

      • *columns – A sequence of column names or Column objects.

      • name – Optional, the in-database name of this constraint.

      • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

      • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

      • **kw – other keyword arguments including dialect-specific arguments are propagated to the Constraint superclass.

  • classmethod sqlalchemy.schema.UniqueConstraint.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.

function sqlalchemy.schema.conv(value, quote=None)

Mark a string indicating that a name has already been converted by a naming convention.

This is a string subclass that indicates a name that should not be subject to any further naming conventions.

E.g. when we create a Constraint using a naming convention as follows:

    m = MetaData(naming_convention={
        "ck": "ck_%(table_name)s_%(constraint_name)s"
    })
    t = Table('t', m, Column('x', Integer),
              CheckConstraint('x > 5', name='x5'))

The name of the above constraint will be rendered as "ck_t_x5". That is, the existing name x5 is used in the naming convention as the constraint_name token.

In some situations, such as in migration scripts, we may be rendering the above CheckConstraint with a name that’s already been converted. In order to make sure the name isn’t double-modified, the new name is applied using the conv() marker. We can use this explicitly as follows:

    m = MetaData(naming_convention={
        "ck": "ck_%(table_name)s_%(constraint_name)s"
    })
    t = Table('t', m, Column('x', Integer),
              CheckConstraint('x > 5', name=conv('ck_t_x5')))

Where above, the conv() marker indicates that the constraint name here is final, and the name will render as "ck_t_x5" and not "ck_t_ck_t_x5".

New in version 0.9.4.

See also

Configuring Constraint Naming Conventions

Indexes

Indexes can be created anonymously (using an auto-generated name ix_<column label>) for a single column using the inline index keyword on Column, which also modifies the usage of unique to apply the uniqueness to the index itself, instead of adding a separate UNIQUE constraint. For indexes with specific names or which encompass more than one column, use the Index construct, which requires a name.

Below we illustrate a Table with several Index objects associated. The DDL for “CREATE INDEX” is issued right after the create statements for the table:

    meta = MetaData()
    mytable = Table('mytable', meta,
        # an indexed column, with index "ix_mytable_col1"
        Column('col1', Integer, index=True),

        # a uniquely indexed column with index "ix_mytable_col2"
        Column('col2', Integer, index=True, unique=True),

        Column('col3', Integer),
        Column('col4', Integer),

        Column('col5', Integer),
        Column('col6', Integer),
    )

    # place an index on col3, col4
    Index('idx_col34', mytable.c.col3, mytable.c.col4)

    # place a unique index on col5, col6
    Index('myindex', mytable.c.col5, mytable.c.col6, unique=True)

    mytable.create(engine)
    CREATE TABLE mytable (
        col1 INTEGER,
        col2 INTEGER,
        col3 INTEGER,
        col4 INTEGER,
        col5 INTEGER,
        col6 INTEGER
    )
    CREATE INDEX ix_mytable_col1 ON mytable (col1)
    CREATE UNIQUE INDEX ix_mytable_col2 ON mytable (col2)
    CREATE UNIQUE INDEX myindex ON mytable (col5, col6)
    CREATE INDEX idx_col34 ON mytable (col3, col4)

Note that in the example above, the Index construct is created externally to the table to which it corresponds, using Column objects directly. Index also supports “inline” definition inside the Table, using string names to identify columns:

    meta = MetaData()
    mytable = Table('mytable', meta,
        Column('col1', Integer),
        Column('col2', Integer),
        Column('col3', Integer),
        Column('col4', Integer),

        # place an index on col1, col2
        Index('idx_col12', 'col1', 'col2'),

        # place a unique index on col3, col4
        Index('idx_col34', 'col3', 'col4', unique=True)
    )

The Index object also supports its own create() method:

    i = Index('someindex', mytable.c.col5)
    i.create(engine)
    CREATE INDEX someindex ON mytable (col5)

Functional Indexes

Index supports SQL and function expressions, as supported by the target backend. To create an index against a column using a descending value, the ColumnElement.desc() modifier may be used:

    from sqlalchemy import Index

    Index('someindex', mytable.c.somecol.desc())

Or with a backend that supports functional indexes such as PostgreSQL, a “case insensitive” index can be created using the lower() function:

    from sqlalchemy import func, Index

    Index('someindex', func.lower(mytable.c.somecol))
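Such an index is generally only of use to queries that filter or order on the same expression; a brief sketch using the 1.4-style select() construct:

    from sqlalchemy import func, select

    # the comparison uses the same lower() expression that the index covers
    stmt = select(mytable).where(func.lower(mytable.c.somecol) == 'some value')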

Index API

Object Name - Description

Index - A table-level INDEX.

class sqlalchemy.schema.Index(name, *expressions, **kw)

A table-level INDEX.

Defines a composite (one or more column) INDEX.

E.g.:

  1. sometable = Table("sometable", metadata,
  2. Column("name", String(50)),
  3. Column("address", String(100))
  4. )
  5. Index("some_index", sometable.c.name)

For a no-frills, single column index, adding Column also supports index=True:

  1. sometable = Table("sometable", metadata,
  2. Column("name", String(50), index=True)
  3. )

For a composite index, multiple columns can be specified:

  1. Index("some_index", sometable.c.name, sometable.c.address)

Functional indexes are supported as well, typically by using the func construct in conjunction with table-bound Column objects:

  1. Index("some_index", func.lower(sometable.c.name))

An Index can also be manually associated with a Table, either through inline declaration or using Table.append_constraint(). When this approach is used, the names of the indexed columns can be specified as strings:

  1. Table("sometable", metadata,
  2. Column("name", String(50)),
  3. Column("address", String(100)),
  4. Index("some_index", "name", "address")
  5. )

To support functional or expression-based indexes in this form, the text() construct may be used:

    from sqlalchemy import text

    Table("sometable", metadata,
        Column("name", String(50)),
        Column("address", String(100)),
        Index("some_index", text("lower(name)"))
    )

New in version 0.9.5: the text() construct may be used to specify Index expressions, provided the Index is explicitly associated with the Table.

See also

Indexes - General information on Index.

PostgreSQL-Specific Index Options - PostgreSQL-specific options available for the Index construct.

MySQL / MariaDB- Specific Index Options - MySQL-specific options available for the Index construct.

Clustered Index Support - MSSQL-specific options available for the Index construct.

Class signature

class sqlalchemy.schema.Index (sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.ColumnCollectionMixin, sqlalchemy.schema.SchemaItem)

  • method sqlalchemy.schema.Index.__init__(name, *expressions, **kw)

    Construct an index object.

    • Parameters

      • name – The name of the index

      • *expressions – Column expressions to include in the index. The expressions are normally instances of Column, but may also be arbitrary SQL expressions which ultimately refer to a Column.

      • unique=False – Keyword only argument; if True, create a unique index.

      • quote=None – Keyword only argument; whether to apply quoting to the name of the index. Works in the same manner as that of Column.quote.

      • info=None

        Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

        New in version 1.0.0.

      • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

  • classmethod sqlalchemy.schema.Index.argument_for(dialect_name, argument_name, default)

    inherited from the DialectKWArgs.argument_for() method of DialectKWArgs

    Add a new kind of dialect-specific keyword argument for this class.

    E.g.:

    Index.argument_for("mydialect", "length", None)

    some_index = Index('a', 'b', mydialect_length=5)

    The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

    New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

    • Parameters

      • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.

      • argument_name – name of the parameter.

      • default – default value of the parameter.

    New in version 0.9.4.