Reflecting Database Objects

A Table object can be instructed to load information about itself from the corresponding database schema object already existing within the database. This process is called reflection. In the simplest case you need only specify the table name, a MetaData object, and the autoload_with argument:

    >>> messages = Table('messages', meta, autoload_with=engine)
    >>> [c.name for c in messages.columns]
    ['message_id', 'message_name', 'date']

The above operation will use the given engine to query the database for information about the messages table, and will then generate Column, ForeignKey, and other objects corresponding to this information as though the Table object were hand-constructed in Python.

When tables are reflected, if a given table references another one via foreign key, a second Table object is created within the MetaData object representing the connection. Below, assume the table shopping_cart_items references a table named shopping_carts. Reflecting the shopping_cart_items table has the effect that the shopping_carts table will also be loaded:

    >>> shopping_cart_items = Table('shopping_cart_items', meta, autoload_with=engine)
    >>> 'shopping_carts' in meta.tables
    True

The MetaData has an interesting “singleton-like” behavior such that if you requested both tables individually, MetaData will ensure that exactly one Table object is created for each distinct table name. The Table constructor actually returns to you the already-existing Table object if one already exists with the given name. For example, below we can access the already generated shopping_carts table just by naming it:

    shopping_carts = Table('shopping_carts', meta)

Of course, it’s a good idea to use autoload_with=engine with the above table regardless, so that the table’s attributes will be loaded if they have not been already. The autoload operation only occurs for the table if it hasn’t already been loaded; once loaded, new calls to Table with the same name will not re-issue any reflection queries.
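
For instance, a brief sketch reusing the meta and engine objects from the examples above: requesting shopping_carts again with autoload_with simply hands back the Table object that was already reflected, without emitting new queries:

    # 'shopping_carts' was already reflected as a side effect of reflecting
    # 'shopping_cart_items'; this call returns that existing Table object and
    # does not re-issue reflection queries
    shopping_carts = Table('shopping_carts', meta, autoload_with=engine)
    assert 'shopping_carts' in meta.tables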

Overriding Reflected Columns

Individual columns can be overridden with explicit values when reflecting tables; this is handy for specifying custom datatypes, constraints such as primary keys that may not be configured within the database, etc.:

    >>> mytable = Table('mytable', meta,
    ... Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
    ... Column('mydata', Unicode(50)),              # override reflected 'mydata' to be Unicode
    ... # additional Column objects which require no change are reflected normally
    ... autoload_with=some_engine)

See also

Working with Custom Types and Reflection - illustrates how the above column override technique applies to the use of custom datatypes with table reflection.

Reflecting Views

The reflection system can also reflect views. Basic usage is the same as that of a table:

    my_view = Table("some_view", metadata, autoload_with=engine)

Above, my_view is a Table object with Column objects representing the names and types of each column within the view “some_view”.

Usually, it’s desired to have at least a primary key constraint when reflecting a view, if not foreign keys as well. View reflection doesn’t extrapolate these constraints.

Use the “override” technique for this, specifying explicitly those columns which are part of the primary key or have foreign key constraints:

    my_view = Table("some_view", metadata,
        Column("view_id", Integer, primary_key=True),
        Column("related_thing", Integer, ForeignKey("othertable.thing_id")),
        autoload_with=engine
    )

Reflecting All Tables at Once

The MetaData object can also get a listing of tables and reflect the full set. This is achieved by using the reflect() method. After calling it, all located tables are present within the MetaData object’s dictionary of tables:

    meta = MetaData()
    meta.reflect(bind=someengine)
    users_table = meta.tables['users']
    addresses_table = meta.tables['addresses']
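
MetaData.reflect() also accepts options to limit or broaden what is reflected; a short sketch, where the table names passed to only are illustrative:

    meta = MetaData()
    # reflect only the named tables; views=True additionally reflects views
    meta.reflect(bind=someengine, only=['users', 'addresses'], views=True)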

metadata.reflect() also provides a handy way to clear or delete all the rows in a database:

    meta = MetaData()
    meta.reflect(bind=someengine)
    for table in reversed(meta.sorted_tables):
        someengine.execute(table.delete())
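
The engine-level execute() above is the older calling style; a sketch of the same operation performed on an explicit connection inside a transaction, assuming the same tables:

    meta = MetaData()
    meta.reflect(bind=someengine)
    # run all the deletes within a single transaction on one connection
    with someengine.begin() as conn:
        for table in reversed(meta.sorted_tables):
            conn.execute(table.delete())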

Fine Grained Reflection with Inspector

A low level interface which provides a backend-agnostic system of loading lists of schema, table, column, and constraint descriptions from a given database is also available. This is known as the “Inspector”:

    from sqlalchemy import create_engine
    from sqlalchemy import inspect
    engine = create_engine('...')
    insp = inspect(engine)
    print(insp.get_table_names())

Object Name            Description
Inspector              Performs database schema inspection.

class sqlalchemy.engine.reflection.Inspector(bind)

Performs database schema inspection.

The Inspector acts as a proxy to the reflection methods of the Dialect, providing a consistent interface as well as caching support for previously fetched metadata.

An Inspector object is usually created via the inspect() function, which may be passed an Engine or a Connection:

    from sqlalchemy import inspect, create_engine
    engine = create_engine('...')
    insp = inspect(engine)

Above, the Dialect associated with the engine may opt to return an Inspector subclass that provides additional methods specific to the dialect’s target database.

  • method sqlalchemy.engine.reflection.Inspector.__init__(bind)

    Initialize a new Inspector.

    Deprecated since version 1.4: The __init__() method on Inspector is deprecated and will be removed in a future release. Please use the inspect() function on an Engine or Connection in order to acquire an Inspector.

    • Parameters

      bind – a Connectable, which is typically an instance of Engine or Connection.

    For a dialect-specific instance of Inspector, see Inspector.from_engine()

  • attribute sqlalchemy.engine.reflection.Inspector.default_schema_name

    Return the default schema name presented by the dialect for the current engine’s database user.

    E.g. this is typically public for PostgreSQL and dbo for SQL Server.

  • classmethod sqlalchemy.engine.reflection.Inspector.from_engine(bind)

    Construct a new dialect-specific Inspector object from the given engine or connection.

    Deprecated since version 1.4: The from_engine() method on Inspector is deprecated and will be removed in a future release. Please use the inspect() function on an Engine or Connection in order to acquire an Inspector.

    • Parameters

      bind – a Connectable, which is typically an instance of Engine or Connection.

    This method differs from a direct constructor call of Inspector in that the Dialect is given a chance to provide a dialect-specific Inspector instance, which may provide additional methods.

    See the example at Inspector.

  • method sqlalchemy.engine.reflection.Inspector.get_check_constraints(table_name, schema=None, **kw)

    Return information about check constraints in table_name.

    Given a string table_name and an optional string schema, return check constraint information as a list of dicts with these keys:

    • name - the check constraint’s name

    • sqltext - the check constraint’s SQL expression

    • dialect_options - may or may not be present; a dictionary with additional dialect-specific options for this CHECK constraint

      New in version 1.3.8.

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.

    New in version 1.1.0.
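
    For example, a short sketch of iterating the returned dictionaries, using an inspector created as shown earlier (the table name here is illustrative):

      insp = inspect(engine)
      for cc in insp.get_check_constraints("mytable"):
          # each entry is a dict with at least 'name' and 'sqltext'
          print(cc["name"], cc["sqltext"])
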
  • method sqlalchemy.engine.reflection.Inspector.get_columns(table_name, schema=None, **kw)

    Return information about columns in table_name.

    Given a string table_name and an optional string schema, return column information as a list of dicts with these keys:

    • name - the column’s name

    • type - the type of this column; an instance of TypeEngine

    • nullable - boolean flag indicating whether the column is nullable (NULL) or NOT NULL

    • default - the column’s server default value - this is returned as a string SQL expression.

    • autoincrement - indicates that the column is auto incremented - this is returned as a boolean or ‘auto’

    • comment - (optional) the comment on the column. Only some dialects return this key

    • computed - (optional) when present it indicates that this column is computed by the database. Only some dialects return this key. Returned as a dict with the keys:

      • sqltext - the expression used to generate this column returned as a string SQL expression

      • persisted - (optional) boolean that indicates if the column is stored in the table

      New in version 1.3.16: - added support for computed reflection.

    • identity - (optional) when present it indicates that this column is a generated always column. Only some dialects return this key. For a list of keywords on this dict see Identity.

      New in version 1.4: - added support for identity column reflection.

    • dialect_options - (optional) a dict with dialect specific options

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.

      Returns

      list of dictionaries, each representing the definition of a database column.
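
    A minimal usage sketch, assuming an engine and inspector as created earlier and a hypothetical table named user_account:

      insp = inspect(engine)
      for col in insp.get_columns("user_account"):
          # 'name', 'type' and 'nullable' are always present per the keys above
          print(col["name"], col["type"], col["nullable"])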

  • method sqlalchemy.engine.reflection.Inspector.get_foreign_keys(table_name, schema=None, **kw)

    Return information about foreign_keys in table_name.

    Given a string table_name, and an optional string schema, return foreign key information as a list of dicts with these keys:

    • constrained_columns - a list of column names that make up the foreign key

    • referred_schema - the name of the referred schema

    • referred_table - the name of the referred table

    • referred_columns - a list of column names in the referred table that correspond to constrained_columns

    • name - optional name of the foreign key constraint.

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.
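
    A brief sketch that prints each foreign key of a hypothetical address table:

      insp = inspect(engine)
      for fk in insp.get_foreign_keys("address"):
          print(fk["constrained_columns"], "->",
                fk["referred_table"], fk["referred_columns"])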

  • method sqlalchemy.engine.reflection.Inspector.get_indexes(table_name, schema=None, **kw)

    Return information about indexes in table_name.

    Given a string table_name and an optional string schema, return index information as a list of dicts with these keys:

    • name - the index’s name

    • column_names - list of column names in order

    • unique - boolean

    • column_sorting - optional dict mapping column names to tuple of sort keywords, which may include asc, desc, nulls_first, nulls_last.

      New in version 1.3.5.

    • dialect_options - dict of dialect-specific index options. May not be present for all dialects.

      New in version 1.0.0.

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.
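
    For example, again using a hypothetical address table:

      insp = inspect(engine)
      for ix in insp.get_indexes("address"):
          print(ix["name"], ix["column_names"],
                "unique" if ix["unique"] else "non-unique")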

  • method sqlalchemy.engine.reflection.Inspector.get_pk_constraint(table_name, schema=None, **kw)

    Return information about primary key constraint on table_name.

    Given a string table_name, and an optional string schema, return primary key information as a dictionary with these keys:

    • constrained_columns - a list of column names that make up the primary key

    • name - optional name of the primary key constraint.

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.
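
    A short sketch (the table name is illustrative):

      insp = inspect(engine)
      pk = insp.get_pk_constraint("address")
      print(pk["constrained_columns"])   # e.g. ['id']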

  • method sqlalchemy.engine.reflection.Inspector.get_schema_names()

    Return all schema names.

  • method sqlalchemy.engine.reflection.Inspector.get_sequence_names(schema=None)

    Return all sequence names in schema.

    • Parameters

      schema – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.

  • method sqlalchemy.engine.reflection.Inspector.get_sorted_table_and_fkc_names(schema=None)

    Return dependency-sorted table and foreign key constraint names within a particular schema.

    This will yield 2-tuples of (tablename, [(tname, fkname), (tname, fkname), ...]) consisting of table names in CREATE order grouped with the foreign key constraint names that are not detected as belonging to a cycle. The final element will be (None, [(tname, fkname), (tname, fkname), ..]) which will consist of remaining foreign key constraint names that would require a separate CREATE step after-the-fact, based on dependencies between tables.

    New in version 1.0.0.

    See also

    Inspector.get_table_names()

    sort_tables_and_constraints() - similar method which works with an already-given MetaData.
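
    A sketch of walking the returned structure; the final (None, ...) entry holds the constraints that would require a separate step:

      insp = inspect(engine)
      for table_name, fk_constraints in insp.get_sorted_table_and_fkc_names():
          if table_name is not None:
              print("table:", table_name, "constraints:", fk_constraints)
          else:
              # constraints that would need a separate CREATE / ALTER step
              print("remaining constraints:", fk_constraints)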

  • method sqlalchemy.engine.reflection.Inspector.get_table_comment(table_name, schema=None, **kw)

    Return information about the table comment for table_name.

    Given a string table_name and an optional string schema, return table comment information as a dictionary with these keys:

    • text - text of the comment.

    Raises NotImplementedError for a dialect that does not support comments.

    New in version 1.2.

  • method sqlalchemy.engine.reflection.Inspector.get_table_names(schema=None)

    Return all table names within a particular schema.

    The names are expected to be real tables only, not views. Views are instead returned using the Inspector.get_view_names() method.

    • Parameters

      schema – Schema name. If schema is left at None, the database’s default schema is used, else the named schema is searched. If the database does not support named schemas, behavior is undefined if schema is not passed as None. For special quoting, use quoted_name.

    See also

    Inspector.get_sorted_table_and_fkc_names()

    MetaData.sorted_tables
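
    For example, listing tables from the default schema and from an explicitly named schema (the schema name is hypothetical):

      insp = inspect(engine)
      print(insp.get_table_names())
      print(insp.get_table_names(schema="some_other_schema"))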

  • method sqlalchemy.engine.reflection.Inspector.get_table_options(table_name, schema=None, **kw)

    Return a dictionary of options specified when the table of the given name was created.

    This currently includes some options that apply to MySQL tables.

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.

  • method sqlalchemy.engine.reflection.Inspector.get_temp_table_names()

    Return a list of temporary table names for the current bind.

    This method is unsupported by most dialects; currently only SQLite implements it.

    New in version 1.0.0.

  • method sqlalchemy.engine.reflection.Inspector.get_temp_view_names()

    Return a list of temporary view names for the current bind.

    This method is unsupported by most dialects; currently only SQLite implements it.

    New in version 1.0.0.

  • method sqlalchemy.engine.reflection.Inspector.get_unique_constraints(table_name, schema=None, **kw)

    Return information about unique constraints in table_name.

    Given a string table_name and an optional string schema, return unique constraint information as a list of dicts with these keys:

    • name - the unique constraint’s name

    • column_names - list of column names in order

    • Parameters

      • table_name – string name of the table. For special quoting, use quoted_name.

      • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.

  • method sqlalchemy.engine.reflection.Inspector.get_view_definition(view_name, schema=None)

    Return definition for view_name.

    • Parameters

      schema – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.

  • method sqlalchemy.engine.reflection.Inspector.get_view_names(schema=None)

    Return all view names in schema.

    • Parameters

      schema – Optional, retrieve names from a non-default schema. For special quoting, use quoted_name.
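
    A combined sketch using this method together with Inspector.get_view_definition(); whether the definition text is available depends on the dialect:

      insp = inspect(engine)
      for view_name in insp.get_view_names():
          print(view_name)
          print(insp.get_view_definition(view_name))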

  • method sqlalchemy.engine.reflection.Inspector.has_sequence(sequence_name, schema=None)

    Return True if the backend has a sequence of the given name.

    • Parameters

      • sequence_name – name of the sequence to check

      • schema – schema name to query, if not the default schema.

    New in version 1.4.

  • method sqlalchemy.engine.reflection.Inspector.has_table(table_name, schema=None)

    Return True if the backend has a table of the given name.

    • Parameters

      • table_name – name of the table to check

      • schema – schema name to query, if not the default schema.

    New in version 1.4.
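
    A quick existence check, with a hypothetical table and schema name:

      insp = inspect(engine)
      print(insp.has_table("user_account"))
      print(insp.has_table("user_account", schema="some_other_schema"))
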
  • method sqlalchemy.engine.reflection.Inspector.reflect_table(table, include_columns, exclude_columns=(), resolve_fks=True, _extend_on=None)

    Given a Table object, load its internal constructs based on introspection.

    This is the underlying method used by most dialects to produce table reflection. Direct usage is like:

      from sqlalchemy import create_engine, MetaData, Table
      from sqlalchemy import inspect
      engine = create_engine('...')
      meta = MetaData()
      user_table = Table('user', meta)
      insp = inspect(engine)
      insp.reflect_table(user_table, None)

    Changed in version 1.4: Renamed from reflecttable to reflect_table

    • Parameters

      • table – a Table instance.

      • include_columns – a list of string column names to include in the reflection process. If None, all columns are reflected.

Reflecting with Database-Agnostic Types

When the columns of a table are reflected, using either the Table.autoload_with parameter of Table or the Inspector.get_columns() method of Inspector, the datatypes will be as specific as possible to the target database. This means that if an “integer” datatype is reflected from a MySQL database, the type will be represented by the sqlalchemy.dialects.mysql.INTEGER class, which includes MySQL-specific attributes such as “display_width”. Or on PostgreSQL, a PostgreSQL-specific datatype such as sqlalchemy.dialects.postgresql.INTERVAL or sqlalchemy.dialects.postgresql.ENUM may be returned.

There is a use case for reflection in which a given Table is to be transferred to a different vendor database. To suit this use case, there is a technique by which these vendor-specific datatypes can be converted on the fly into instances of SQLAlchemy backend-agnostic datatypes; for the examples above, types such as Integer, Interval and Enum. This may be achieved by intercepting the column reflection using the DDLEvents.column_reflect() event in conjunction with the TypeEngine.as_generic() method.

Given a table in MySQL (chosen because MySQL has a lot of vendor-specific datatypes and options):

    CREATE TABLE IF NOT EXISTS my_table (
        id INTEGER PRIMARY KEY AUTO_INCREMENT,
        data1 VARCHAR(50) CHARACTER SET latin1,
        data2 MEDIUMINT(4),
        data3 TINYINT(2)
    )

The above table includes MySQL-only integer types MEDIUMINT and TINYINT as well as a VARCHAR that includes the MySQL-only CHARACTER SET option. If we reflect this table normally, it produces a Table object that will contain those MySQL-specific datatypes and options:

    >>> from sqlalchemy import MetaData, Table, create_engine
    >>> mysql_engine = create_engine("mysql://scott:tiger@localhost/test")
    >>> metadata = MetaData()
    >>> my_mysql_table = Table("my_table", metadata, autoload_with=mysql_engine)

The above example reflects the above table schema into a new Table object. We can then, for demonstration purposes, print out the MySQL-specific “CREATE TABLE” statement using the CreateTable construct:

    >>> from sqlalchemy.schema import CreateTable
    >>> print(CreateTable(my_mysql_table).compile(mysql_engine))
    CREATE TABLE my_table (
        id INTEGER(11) NOT NULL AUTO_INCREMENT,
        data1 VARCHAR(50) CHARACTER SET latin1,
        data2 MEDIUMINT(4),
        data3 TINYINT(2),
        PRIMARY KEY (id)
    )ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

Above, the MySQL-specific datatypes and options were maintained. If we wanted a Table that we could instead transfer cleanly to another database vendor, replacing the special datatypes sqlalchemy.dialects.mysql.MEDIUMINT and sqlalchemy.dialects.mysql.TINYINT with Integer, we can choose instead to “genericize” the datatypes on this table, or otherwise change them in any way we’d like, by establishing a handler using the DDLEvents.column_reflect() event. The custom handler will make use of the TypeEngine.as_generic() method to convert the above MySQL-specific type objects into generic ones, by replacing the "type" entry within the column dictionary entry that is passed to the event handler. The format of this dictionary is described at Inspector.get_columns():

    >>> from sqlalchemy import event
    >>> metadata = MetaData()
    >>> @event.listens_for(metadata, "column_reflect")
    ... def genericize_datatypes(inspector, tablename, column_dict):
    ...     column_dict["type"] = column_dict["type"].as_generic()
    >>> my_generic_table = Table("my_table", metadata, autoload_with=mysql_engine)

We now get a new Table that is generic and uses Integer for those datatypes. We can now emit a “CREATE TABLE” statement for example on a PostgreSQL database:

    >>> pg_engine = create_engine("postgresql://scott:tiger@localhost/test", echo=True)
    >>> my_generic_table.create(pg_engine)
    CREATE TABLE my_table (
        id SERIAL NOT NULL,
        data1 VARCHAR(50),
        data2 INTEGER,
        data3 INTEGER,
        PRIMARY KEY (id)
    )

Note also that above, SQLAlchemy will usually make a decent guess for other behaviors, such as that the MySQL AUTO_INCREMENT directive is represented in PostgreSQL most closely using the SERIAL auto-incrementing datatype.

New in version 1.4: Added the TypeEngine.as_generic() method and additionally improved the use of the DDLEvents.column_reflect() event such that it may be applied to a MetaData object for convenience.

Limitations of Reflection

It’s important to note that the reflection process recreates Table metadata using only information which is represented in the relational database. This process by definition cannot restore aspects of a schema that aren’t actually stored in the database. State which is not available from reflection includes but is not limited to:

  • Client side defaults, either Python functions or SQL expressions defined using the default keyword of Column (note this is separate from server_default, which specifically is what’s available via reflection).

  • Column information, e.g. data that might have been placed into the Column.info dictionary

  • The value of the .quote setting for Column or Table

  • The association of a particular Sequence with a given Column

The relational database also in many cases reports on table metadata in a different format than what was specified in SQLAlchemy. The Table objects returned from reflection cannot always be relied upon to produce identical DDL as the original Python-defined Table objects. Areas where this occurs include server defaults, column-associated sequences and various idiosyncrasies regarding constraints and datatypes. Server side defaults may be returned with cast directives (typically PostgreSQL will include a ::<type> cast) or different quoting patterns than originally specified.

Another category of limitation includes schema structures for which reflection is only partially or not yet defined. Recent improvements to reflection allow things like views, indexes and foreign key options to be reflected. As of this writing, structures like CHECK constraints, table comments, and triggers are not reflected.