Microsoft SQL Server

Support for the Microsoft SQL Server database.

DBAPI Support

The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

Auto Increment Behavior / IDENTITY Columns

SQL Server provides so-called “auto incrementing” behavior using the IDENTITY construct, which can be placed on any single integer column in a table. SQLAlchemy considers IDENTITY within its default “autoincrement” behavior for an integer primary key column, described at Column.autoincrement. This means that by default, the first integer primary key column in a Table will be considered to be the identity column and will generate DDL as such:

    from sqlalchemy import Table, MetaData, Column, Integer

    m = MetaData()
    t = Table('t', m,
            Column('id', Integer, primary_key=True),
            Column('x', Integer))
    m.create_all(engine)

The above example will generate DDL as:

    CREATE TABLE t (
        id INTEGER NOT NULL IDENTITY(1,1),
        x INTEGER NULL,
        PRIMARY KEY (id)
    )

For the case where this default generation of IDENTITY is not desired, specify False for the Column.autoincrement flag on the first integer primary key column:

    m = MetaData()
    t = Table('t', m,
            Column('id', Integer, primary_key=True, autoincrement=False),
            Column('x', Integer))
    m.create_all(engine)

To add the IDENTITY keyword to a non-primary key column, specify True for the Column.autoincrement flag on the desired Column object, and ensure that Column.autoincrement is set to False on any integer primary key column:

    m = MetaData()
    t = Table('t', m,
            Column('id', Integer, primary_key=True, autoincrement=False),
            Column('x', Integer, autoincrement=True))
    m.create_all(engine)

Changed in version 1.3: Added mssql_identity_start and mssql_identity_increment parameters to Column. These replace the use of the Sequence object in order to specify these values.

Deprecated since version 1.3: The use of Sequence to specify IDENTITY characteristics is deprecated and will be removed in a future release. Please use the mssql_identity_start and mssql_identity_increment parameters documented at Auto Increment Behavior / IDENTITY Columns.

Note

There can only be one IDENTITY column on the table. When using autoincrement=True to enable the IDENTITY keyword, SQLAlchemy does not guard against multiple columns specifying the option simultaneously. The SQL Server database will instead reject the CREATE TABLE statement.

Note

An INSERT statement which attempts to provide a value for a column that is marked with IDENTITY will be rejected by SQL Server. In order for the value to be accepted, a session-level option “SET IDENTITY_INSERT” must be enabled. The SQLAlchemy SQL Server dialect will perform this operation automatically when using a Core Insert construct; if the execution specifies a value for the IDENTITY column, the “IDENTITY_INSERT” option will be enabled for the span of that statement’s invocation. However, this scenario is not high performing and should not be relied upon for normal use. If a table doesn’t actually require IDENTITY behavior in its integer primary key column, the keyword should be disabled when creating the table by ensuring that autoincrement=False is set.

Controlling “Start” and “Increment”

Specific control over the “start” and “increment” values for the IDENTITY generator are provided using the mssql_identity_start and mssql_identity_increment parameters passed to the Column object:

    from sqlalchemy import Table, Integer, Column, String

    test = Table(
        'test', metadata,
        Column(
            'id', Integer, primary_key=True, mssql_identity_start=100,
            mssql_identity_increment=10
        ),
        Column('name', String(20))
    )

The CREATE TABLE for the above Table object would be:

    CREATE TABLE test (
        id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
        name VARCHAR(20) NULL
    )

Changed in version 1.3: The mssql_identity_start and mssql_identity_increment parameters are now used to affect the IDENTITY generator for a Column under SQL Server. Previously, the Sequence object was used. As SQL Server now supports real sequences as a separate construct, Sequence will be functional in the normal way in a future SQLAlchemy version.

INSERT behavior

Handling of the IDENTITY column at INSERT time involves two key techniques. The most common is being able to fetch the “last inserted value” for a given IDENTITY column, a process which SQLAlchemy performs implicitly in many cases, most importantly within the ORM.

The process for fetching this value has several variants:

  • In the vast majority of cases, RETURNING is used in conjunction with INSERT statements on SQL Server in order to get newly generated primary key values:

        INSERT INTO t (x) OUTPUT inserted.id VALUES (?)
  • When RETURNING is not available or has been disabled via implicit_returning=False, either the scope_identity() function or the @@identity variable is used; behavior varies by backend:

    • when using PyODBC, the phrase ; select scope_identity() will be appended to the end of the INSERT statement; a second result set will be fetched in order to receive the value. Given a table as:

        t = Table('t', m, Column('id', Integer, primary_key=True),
                Column('x', Integer),
                implicit_returning=False)

an INSERT will look like:

    INSERT INTO t (x) VALUES (?); select scope_identity()
  • Other dialects such as pymssql will call upon SELECT scope_identity() AS lastrowid subsequent to an INSERT statement. If the flag use_scope_identity=False is passed to create_engine(), the statement SELECT @@identity AS lastrowid is used instead.
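As an illustration of the first variant, the OUTPUT clause rendering can be observed without a live connection by compiling an insert() that requests the column explicitly via Insert.returning(); the dialect normally adds this clause implicitly at execution time, so the explicit call here is only for demonstration:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table("t", m,
          Column("id", Integer, primary_key=True),
          Column("x", Integer))

# Request the IDENTITY column back from the INSERT; on SQL Server this
# renders as an OUTPUT inserted.<col> clause rather than RETURNING.
stmt = t.insert().values(x=5).returning(t.c.id)
sql = str(stmt.compile(dialect=mssql.dialect()))
print(sql)
```

The compiled string shows the OUTPUT inserted.id clause placed between the column list and the VALUES clause, matching the statement shown above.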

A table that contains an IDENTITY column will prohibit an INSERT statement that refers to the identity column explicitly. The SQLAlchemy dialect will detect when an INSERT construct, created using a Core insert() construct (not a plain string SQL), refers to the identity column, and in this case will emit SET IDENTITY_INSERT ON prior to the insert statement proceeding, and SET IDENTITY_INSERT OFF subsequent to the execution. Given this example:

    m = MetaData()
    t = Table('t', m, Column('id', Integer, primary_key=True),
            Column('x', Integer))
    m.create_all(engine)

    engine.execute(t.insert(), {'id': 1, 'x': 1}, {'id': 2, 'x': 2})

The above column will be created with IDENTITY, however the INSERT statement we emit is specifying explicit values. In the echo output we can see how SQLAlchemy handles this:

    CREATE TABLE t (
        id INTEGER NOT NULL IDENTITY(1,1),
        x INTEGER NULL,
        PRIMARY KEY (id)
    )

    COMMIT
    SET IDENTITY_INSERT t ON
    INSERT INTO t (id, x) VALUES (?, ?)
    ((1, 1), (2, 2))
    SET IDENTITY_INSERT t OFF
    COMMIT

This is an auxiliary use case suitable for testing and bulk insert scenarios.

MAX on VARCHAR / NVARCHAR

SQL Server supports the special string “MAX” within the sqltypes.VARCHAR and sqltypes.NVARCHAR datatypes, to indicate “maximum length possible”. The dialect currently handles this as a length of “None” in the base type, rather than supplying a dialect-specific version of these types, so that a base type specified such as VARCHAR(None) can assume “unlengthed” behavior on more than one backend without using dialect-specific types.

To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None:

    my_table = Table(
        'my_table', metadata,
        Column('my_data', VARCHAR(None)),
        Column('my_n_data', NVARCHAR(None))
    )
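The resulting “max” rendering can be checked without a database connection by compiling the CREATE TABLE against the SQL Server dialect; a minimal sketch:

```python
from sqlalchemy import Column, MetaData, NVARCHAR, Table, VARCHAR
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

metadata = MetaData()
my_table = Table(
    "my_table", metadata,
    Column("my_data", VARCHAR(None)),     # renders VARCHAR(max)
    Column("my_n_data", NVARCHAR(None)),  # renders NVARCHAR(max)
)

# Compile DDL against the MSSQL dialect; no engine or connection needed.
ddl = str(CreateTable(my_table).compile(dialect=mssql.dialect()))
print(ddl)
```

The emitted DDL contains VARCHAR(max) and NVARCHAR(max) for the two columns, while the same Table compiled against another dialect would render the unlengthed base types.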

Collation Support

Character collations are supported by the base string types, specified by the string argument “collation”:

    from sqlalchemy import VARCHAR
    Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))

When such a column is associated with a Table, the CREATE TABLE statement for this column will yield:

    login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL

LIMIT/OFFSET Support

MSSQL has no support for the LIMIT or OFFSET keywords. LIMIT is supported directly through the TOP Transact-SQL keyword:

    select.limit

will yield:

    SELECT TOP n

If using SQL Server 2005 or above, LIMIT with OFFSET support is available through the ROW_NUMBER OVER construct. For versions below 2005, LIMIT with OFFSET usage will fail.
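The TOP rewriting can be seen by compiling a limited SELECT against the SQL Server dialect; this sketch requires no database connection:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table("t", m,
          Column("id", Integer, primary_key=True),
          Column("x", Integer))

# .limit() on any other backend would render a LIMIT clause; the MSSQL
# compiler rewrites it into the TOP keyword instead.
stmt = t.select().limit(5)
sql = str(stmt.compile(dialect=mssql.dialect()))
print(sql)
```

The compiled string begins with SELECT TOP rather than using a trailing LIMIT clause.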

Transaction Isolation Level

All SQL Server dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level accepted by create_engine(), as well as the Connection.execution_options.isolation_level argument as passed to Connection.execution_options(). This feature works by issuing the command SET TRANSACTION ISOLATION LEVEL <level> for each new connection.

To set isolation level using create_engine():

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@ms_2008",
        isolation_level="REPEATABLE READ"
    )

To set using per-connection execution options:

    connection = engine.connect()
    connection = connection.execution_options(
        isolation_level="READ COMMITTED"
    )

Valid values for isolation_level include:

  • AUTOCOMMIT - pyodbc / pymssql-specific

  • READ COMMITTED

  • READ UNCOMMITTED

  • REPEATABLE READ

  • SERIALIZABLE

  • SNAPSHOT - specific to SQL Server

New in version 1.1: support for isolation level setting on MicrosoftSQL Server.

New in version 1.2: added AUTOCOMMIT isolation level setting

Nullability

MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:

    name VARCHAR(20) NULL

If nullable=None is specified then no specification is made. In other words the database’s configured default is used. This will render:

    name VARCHAR(20)

If nullable is True or False then the column will be NULL or NOT NULL respectively.
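The three behaviors can be seen side by side by compiling DDL against the SQL Server dialect; a minimal sketch that needs no database connection:

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "t", m,
    Column("a", String(20), nullable=True),   # renders "a VARCHAR(20) NULL"
    Column("b", String(20), nullable=False),  # renders "b VARCHAR(20) NOT NULL"
    Column("c", String(20), nullable=None),   # no specification; db default
)
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
print(ddl)
```

Note that the MSSQL dialect renders NULL explicitly for nullable=True columns, unlike most backends which simply omit the clause.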

Date / Time Handling

DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as DATETIME.

Large Text/Binary Type Deprecation

Per SQL Server 2012/2014 documentation, the NTEXT, TEXT and IMAGE datatypes are to be removed from SQL Server in a future release. SQLAlchemy normally relates these types to the UnicodeText, Text and LargeBinary datatypes.

In order to accommodate this change, a new flag deprecate_large_types is added to the dialect, which will be automatically set based on detection of the server version in use, if not otherwise set by the user. The behavior of this flag is as follows:

  • When this flag is True, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NVARCHAR(max), VARCHAR(max), and VARBINARY(max), respectively. This is a new behavior as of the addition of this flag.

  • When this flag is False, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NTEXT, TEXT, and IMAGE, respectively. This is the long-standing behavior of these types.

  • The flag begins with the value None, before a database connection is established. If the dialect is used to render DDL without the flag being set, it is interpreted the same as False.

  • On first connection, the dialect detects if SQL Server version 2012 or greater is in use; if the flag is still at None, it sets it to True or False based on whether 2012 or greater is detected.

  • The flag can be set to either True or False when the dialect is created, typically via create_engine():

        eng = create_engine("mssql+pymssql://user:pass@host/db",
                            deprecate_large_types=True)

  • Complete control over whether the “old” or “new” types are rendered is available in all SQLAlchemy versions by using the UPPERCASE type objects instead: NVARCHAR, VARCHAR, types.VARBINARY, TEXT, mssql.NTEXT, mssql.IMAGE will always remain fixed and always output exactly that type.

New in version 1.0.0.
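The effect of the flag can also be observed without a connection by passing deprecate_large_types directly to the dialect object and compiling DDL; a minimal sketch:

```python
from sqlalchemy import Column, LargeBinary, MetaData, Table, Text, UnicodeText
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "t", m,
    Column("a", UnicodeText()),
    Column("b", Text()),
    Column("c", LargeBinary()),
)

# "new" rendering: NVARCHAR(max) / VARCHAR(max) / VARBINARY(max)
new_ddl = str(CreateTable(t).compile(
    dialect=mssql.dialect(deprecate_large_types=True)))

# "old" rendering: NTEXT / TEXT / IMAGE
old_ddl = str(CreateTable(t).compile(
    dialect=mssql.dialect(deprecate_large_types=False)))

print(new_ddl)
print(old_ddl)
```

On a real engine the flag would normally be left unset, allowing server version detection on first connect to choose the rendering.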

Multipart Schema Names

SQL Server schemas sometimes require multiple parts to their “schema” qualifier, that is, including the database name and owner name as separate tokens, such as mydatabase.dbo.some_table. These multipart names can be set at once using the Table.schema argument of Table:

    Table(
        "some_table", metadata,
        Column("q", String(50)),
        schema="mydatabase.dbo"
    )

When performing operations such as table or component reflection, a schema argument that contains a dot will be split into separate “database” and “owner” components in order to correctly query the SQL Server information schema tables, as these two values are stored separately. Additionally, when rendering the schema name for DDL or SQL, the two components will be quoted separately for case sensitive names and other special characters. Given an argument as below:

    Table(
        "some_table", metadata,
        Column("q", String(50)),
        schema="MyDataBase.dbo"
    )

The above schema would be rendered as [MyDataBase].dbo, and also in reflection, would be reflected using “dbo” as the owner and “MyDataBase” as the database name.
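This quoting behavior can be checked by compiling a simple SELECT against the dialect; a sketch requiring no connection:

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects import mssql

metadata = MetaData()
some_table = Table(
    "some_table", metadata,
    Column("q", String(50)),
    schema="MyDataBase.dbo",
)

# The case sensitive database name is bracket-quoted separately from
# the owner name in the rendered statement.
sql = str(some_table.select().compile(dialect=mssql.dialect()))
print(sql)
```

The FROM clause renders as [MyDataBase].dbo.some_table, with only the case sensitive token quoted.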

To control how the schema name is broken into database / owner, specify brackets (which in SQL Server are quoting characters) in the name. Below, the “owner” will be considered as MyDataBase.dbo and the “database” will be None:

    Table(
        "some_table", metadata,
        Column("q", String(50)),
        schema="[MyDataBase.dbo]"
    )

To individually specify both database and owner name with special characters or embedded dots, use two sets of brackets:

    Table(
        "some_table", metadata,
        Column("q", String(50)),
        schema="[MyDataBase.Period].[MyOwner.Dot]"
    )

Changed in version 1.2: the SQL Server dialect now treats brackets as identifier delimiters, splitting the schema into separate database and owner tokens, to allow dots within either name itself.

Legacy Schema Mode

Very old versions of the MSSQL dialect introduced the behavior such that a schema-qualified table would be auto-aliased when used in a SELECT statement; given a table:

    account_table = Table(
        'account', metadata,
        Column('id', Integer, primary_key=True),
        Column('info', String(100)),
        schema="customer_schema"
    )

this legacy mode of rendering would assume that “customer_schema.account” would not be accepted by all parts of the SQL statement, as illustrated below:

    >>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
    >>> print(account_table.select().compile(eng))
    SELECT account_1.id, account_1.info
    FROM customer_schema.account AS account_1

This mode of behavior is now off by default, as it appears to have served no purpose; however in the case that legacy applications rely upon it, it is available using the legacy_schema_aliasing argument to create_engine() as illustrated above.

Changed in version 1.1: the legacy_schema_aliasing flag introduced in version 1.0.5 to allow disabling of legacy mode for schemas now defaults to False.

Clustered Index Support

The MSSQL dialect supports clustered indexes (and primary keys) via the mssql_clustered option. This option is available to Index, UniqueConstraint, and PrimaryKeyConstraint.

To generate a clustered index:

    Index("my_index", table.c.x, mssql_clustered=True)

which renders the index as CREATE CLUSTERED INDEX my_index ON table (x).

To generate a clustered primary key use:

    Table('my_table', metadata,
          Column('x', ...),
          Column('y', ...),
          PrimaryKeyConstraint("x", "y", mssql_clustered=True))

which will render the table, for example, as:

    CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                           PRIMARY KEY CLUSTERED (x, y))

Similarly, we can generate a clustered unique constraint using:

    Table('my_table', metadata,
          Column('x', ...),
          Column('y', ...),
          PrimaryKeyConstraint("x"),
          UniqueConstraint("y", mssql_clustered=True),
          )

To explicitly request a non-clustered primary key (for example, when a separate clustered index is desired), use:

    Table('my_table', metadata,
          Column('x', ...),
          Column('y', ...),
          PrimaryKeyConstraint("x", "y", mssql_clustered=False))

which will render the table, for example, as:

    CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                           PRIMARY KEY NONCLUSTERED (x, y))

Changed in version 1.1: the mssql_clustered option now defaults to None, rather than False. mssql_clustered=False now explicitly renders the NONCLUSTERED clause, whereas None omits the CLUSTERED clause entirely, allowing SQL Server defaults to take effect.

MSSQL-Specific Index Options

In addition to clustering, the MSSQL dialect supports other special options for Index.

INCLUDE

The mssql_include option renders INCLUDE(colname) for the given string names:

    Index("my_index", table.c.x, mssql_include=['y'])

would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y).

Filtered Indexes

The mssql_where option renders WHERE(condition) for the given expression:

    Index("my_index", table.c.x, mssql_where=table.c.x > 10)

would render the index as CREATE INDEX my_index ON table (x) WHERE x > 10.

New in version 1.3.4.

Index ordering

Index ordering is available via functional expressions, such as:

    Index("my_index", table.c.x.desc())

would render the index as CREATE INDEX my_index ON table (x DESC)

See also

Functional Indexes

Compatibility Levels

MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, to run a database that is compatible with SQL2000 while running on a SQL2005 database server. server_version_info will always return the database server version information (in this case SQL2005) and not the compatibility level information. Because of this, if running under a backwards compatibility mode SQLAlchemy may attempt to use T-SQL statements that are unable to be parsed by the database server.

Triggers

SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False for each Table which has triggers:

    Table('mytable', metadata,
          Column('id', Integer, primary_key=True),
          # ...,
          implicit_returning=False
          )

Declarative form:

    class MyClass(Base):
        # ...
        __table_args__ = {'implicit_returning': False}

This option can also be specified engine-wide using the implicit_returning=False argument on create_engine().

Rowcount Support / ORM Versioning

The SQL Server drivers may have limited ability to return the number of rows updated from an UPDATE or DELETE statement.

As of this writing, the PyODBC driver is not able to return a rowcount when OUTPUT INSERTED is used. This impacts the SQLAlchemy ORM’s versioning feature in many cases where server-side value generators are in use, in that while the versioning operations can succeed, the ORM cannot always check that an UPDATE or DELETE statement matched the number of rows expected, which is how it verifies that the version identifier matched. When this condition occurs, a warning will be emitted but the operation will proceed.

The use of OUTPUT INSERTED can be disabled by setting the Table.implicit_returning flag to False on a particular Table, which in declarative looks like:

    class MyTable(Base):
        __tablename__ = 'mytable'
        id = Column(Integer, primary_key=True)
        stuff = Column(String(10))
        timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
        __mapper_args__ = {
            'version_id_col': timestamp,
            'version_id_generator': False,
        }
        __table_args__ = {
            'implicit_returning': False
        }

Enabling Snapshot Isolation

SQL Server has a default transaction isolation mode that locks entire tables, and causes even mildly concurrent applications to have long held locks and frequent deadlocks. Enabling snapshot isolation for the database as a whole is recommended for modern levels of concurrency support. This is accomplished via the following ALTER DATABASE commands executed at the SQL prompt:

    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON

    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON

Background on SQL Server snapshot isolation is available at http://msdn.microsoft.com/en-us/library/ms175095.aspx.

SQL Server SQL Constructs

  • sqlalchemy.dialects.mssql.try_cast(*arg, **kw)
  • Create a TRY_CAST expression.

TryCast is a subclass of SQLAlchemy’s Cast construct, and works in the same way, except that the SQL expression rendered is “TRY_CAST” rather than “CAST”:

    from sqlalchemy import select
    from sqlalchemy import Numeric
    from sqlalchemy.dialects.mssql import try_cast

    stmt = select([
        try_cast(product_table.c.unit_price, Numeric(10, 4))
    ])

The above would render:

    SELECT TRY_CAST (product_table.unit_price AS NUMERIC(10, 4))
    FROM product_table

New in version 1.3.7.

SQL Server Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with SQL Server are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

    from sqlalchemy.dialects.mssql import \
        BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
        DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \
        NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
        SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
        TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR

Types which are specific to SQL Server, or have SQL Server-specific construction arguments, are as follows:


  • class sqlalchemy.dialects.mssql.CHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
  • Bases: sqlalchemy.types.String

The SQL CHAR type.

  • __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)

inherited from the __init__() method of String

Create a string-holding type.

  • Parameters

    • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

    • collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

        >>> from sqlalchemy import cast, select, String
        >>> print(select([cast('some string', String(collation='utf8'))]))
        SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

    • convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.

For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.

Note

SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

Note

In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

See also

create_engine.convert_unicode - Engine-wide parameter

    • unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions, requires that String.convert_unicode is set to "force".

Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.

  • class sqlalchemy.dialects.mssql.DATETIME2(precision=None, **kw)
  • Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

    • __init__(precision=None, **kw)
    • Initialize self. See help(type(self)) for accurate signature.
  • class sqlalchemy.dialects.mssql.DATETIMEOFFSET(precision=None, **kwargs)

  • Bases: sqlalchemy.types.TypeEngine

    • __init__(precision=None, **kwargs)
    • Initialize self. See help(type(self)) for accurate signature.
  • class sqlalchemy.dialects.mssql.IMAGE(length=None)

  • Bases: sqlalchemy.types.LargeBinary

    • __init__(length=None)

inherited from the __init__() method of LargeBinary

Construct a LargeBinary type.

  • Parameters

    • length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

  • class sqlalchemy.dialects.mssql.NCHAR(length=None, **kwargs)
  • Bases: sqlalchemy.types.Unicode

The SQL NCHAR type.

    • __init__(length=None, **kwargs)

inherited from the __init__() method of Unicode

Create a Unicode object.

Parameters are the same as that of String, with the exception that convert_unicode defaults to True.

  • class sqlalchemy.dialects.mssql.NTEXT(length=None, **kwargs)
  • Bases: sqlalchemy.types.UnicodeText

MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.

    • __init__(length=None, **kwargs)

inherited from the __init__() method of UnicodeText

Create a Unicode-converting Text type.

Parameters are the same as that of Text, with the exception that convert_unicode defaults to True.

  • class sqlalchemy.dialects.mssql.NVARCHAR(length=None, **kwargs)
  • Bases: sqlalchemy.types.Unicode

The SQL NVARCHAR type.

    • __init__(length=None, **kwargs)

inherited from the __init__() method of Unicode

Create a Unicode object.

Parameters are the same as that of String, with the exception that convert_unicode defaults to True.

  • class sqlalchemy.dialects.mssql.REAL(**kw)
  • Bases: sqlalchemy.types.REAL

    • __init__(**kw)
    • Construct a Float.

      • Parameters
        • precision – the numeric precision for use in DDL CREATE TABLE.

        • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

        • decimal_return_scale

Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

New in version 0.9.0.

  • class sqlalchemy.dialects.mssql.ROWVERSION(convert_int=False)
  • Bases: sqlalchemy.dialects.mssql.base.TIMESTAMP

Implement the SQL Server ROWVERSION type.

The ROWVERSION datatype is a SQL Server synonym for the TIMESTAMP datatype, however current SQL Server documentation suggests using ROWVERSION for new datatypes going forward.

The ROWVERSION datatype does not reflect (e.g. introspect) from the database as itself; the returned datatype will be mssql.TIMESTAMP.

This is a read-only datatype that does not support INSERT of values.

New in version 1.2.

See also

mssql.TIMESTAMP

  • __init__(convert_int=False)

inherited from the __init__() method of TIMESTAMP

Construct a TIMESTAMP or ROWVERSION type.

  • Parameters

    • convert_int – if True, binary integer values will be converted to integers on read.

New in version 1.2.

  • class sqlalchemy.dialects.mssql.SMALLDATETIME(timezone=False)
  • Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

    • __init__(timezone=False)

inherited from the __init__() method of DateTime

Construct a new DateTime.

  • Parameters

    • timezone – boolean. Indicates that the datetime type should enable timezone support, if available on the base date/time-holding type only. It is recommended to make use of the TIMESTAMP datatype directly when using this flag, as some databases include separate generic date/time-holding types distinct from the timezone-capable TIMESTAMP datatype, such as Oracle.


  • class sqlalchemy.dialects.mssql.TEXT(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
  • Bases: sqlalchemy.types.Text

The SQL TEXT type.

  • __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)

inherited from the __init__() method of String

Create a string-holding type. The parameters are the same as those documented above for CHAR.

  • class sqlalchemy.dialects.mssql.TIME(precision=None, **kwargs)
  • Bases: sqlalchemy.types.TIME

    • __init__(precision=None, **kwargs)
    • Initialize self. See help(type(self)) for accurate signature.
  • class sqlalchemy.dialects.mssql.TIMESTAMP(convert_int=False)

  • Bases: sqlalchemy.types._Binary

Implement the SQL Server TIMESTAMP type.

Note that this is completely different from the SQL standard TIMESTAMP type, which is not supported by SQL Server. It is a read-only datatype that does not support INSERT of values.

New in version 1.2.

See also

mssql.ROWVERSION

  • __init__(convert_int=False)
  • Construct a TIMESTAMP or ROWVERSION type.

    • Parameters
    • convert_int – if True, binary integer values will be converted to integers on read.

New in version 1.2.

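By way of illustration (a sketch not taken from the original documentation; the table and column names here are hypothetical), the type can be placed in a Table and its DDL rendered by compiling against the MSSQL dialect:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
# Hypothetical table; convert_int=True causes the binary rowversion
# value to be returned as a Python integer on read.
t = Table(
    "t",
    m,
    Column("id", Integer, primary_key=True),
    Column("rv", mssql.TIMESTAMP(convert_int=True)),
)

# Render CREATE TABLE against the MSSQL dialect; the rv column
# comes out as TIMESTAMP.
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
print(ddl)
```

Since the type is read-only on the server side, the column would be populated by SQL Server itself rather than by INSERT statements.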

  • class sqlalchemy.dialects.mssql.VARCHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
  • Bases: sqlalchemy.types.String

The SQL VARCHAR type.

  • __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)

inherited from the __init__() method of String

Create a string-holding type.

Parameters

length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

collation

Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

>>> from sqlalchemy import cast, select, String
>>> print(select([cast('some string', String(collation='utf8'))]))
SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

convert_unicode

When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.

For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here, which will cause SQLAlchemy's encode/decode services to be used unconditionally.

Note

SQLAlchemy's unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

Note

In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

See also

create_engine.convert_unicode - Engine-wide parameter

unicode_error

Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions, and requires that String.convert_unicode is set to "force".

Deprecated since version 1.3: The String.unicode_error parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.

  • class sqlalchemy.dialects.mssql.XML(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
  • Bases: sqlalchemy.types.Text

MSSQL XML type.

This is a placeholder type for reflection purposes that does not include any Python-side datatype support. It also does not currently support additional arguments, such as “CONTENT”, “DOCUMENT”, “xml_schema_collection”.

New in version 1.1.11.

  • __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)

inherited from the __init__() method of String

Create a string-holding type.

Parameters

length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

collation

Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

>>> from sqlalchemy import cast, select, String
>>> print(select([cast('some string', String(collation='utf8'))]))
SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

convert_unicode

When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.

Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.

For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here, which will cause SQLAlchemy's encode/decode services to be used unconditionally.

Note

SQLAlchemy's unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.

Note

In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.

See also

create_engine.convert_unicode - Engine-wide parameter

unicode_error

Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions, and requires that String.convert_unicode is set to "force".

Deprecated since version 1.3: The String.unicode_error parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
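As a sketch (not from the original documentation; the table and column names are hypothetical), the placeholder nature of the type can be seen by compiling DDL against the MSSQL dialect, where the column simply renders as XML:

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
# Hypothetical table; mssql.XML renders as the XML datatype in DDL,
# while values round-trip as plain strings on the Python side.
doc = Table(
    "doc",
    m,
    Column("id", Integer, primary_key=True),
    Column("payload", mssql.XML),
)

ddl = str(CreateTable(doc).compile(dialect=mssql.dialect()))
print(ddl)
```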

PyODBC

Support for the Microsoft SQL Server database via the PyODBC driver.

DBAPI

Documentation and download information (if applicable) for PyODBC is available at: http://pypi.python.org/pypi/pyodbc/

Connecting

Connect String:

mssql+pyodbc://<username>:<password>@<dsnname>

Connecting to PyODBC

The URL here is to be translated to a PyODBC connection string, as detailed in ConnectionStrings.

DSN Connections

A DSN-based connection is preferred overall when using ODBC. Abasic DSN-based connection looks like:

engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

The above URL will pass the following connection string to PyODBC:

dsn=some_dsn;UID=scott;PWD=tiger

If the username and password are omitted, the DSN form will also addthe Trusted_Connection=yes directive to the ODBC string.
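The way the URL fields map into the ODBC string can be illustrated with SQLAlchemy's URL parser (a sketch; the DSN name here is hypothetical):

```python
from sqlalchemy.engine.url import make_url

# Hypothetical DSN name; the username/password become UID/PWD in the
# ODBC connection string, and the host portion names the DSN.
url = make_url("mssql+pyodbc://scott:tiger@some_dsn")
print(url.username, url.host)
```

Omitting `scott:tiger@` from the URL above is what triggers the Trusted_Connection=yes behavior described in the text.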

Hostname Connections

Hostname-based connections are not preferred; however, they are supported. The ODBC driver name must be explicitly specified:

engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=SQL+Server+Native+Client+10.0")

Changed in version 1.0.0: Hostname-based PyODBC connections now require theSQL Server driver name specified explicitly. SQLAlchemy cannotchoose an optimal default here as it varies based on platformand installed drivers.

Other keywords interpreted by the Pyodbc dialect to be passed topyodbc.connect() in both the DSN and hostname cases include:odbc_autotranslate, ansi, unicode_results, autocommit.
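As a sketch (the DSN name is hypothetical), such keywords travel in the URL's query string, which can be inspected with SQLAlchemy's URL parser before they are handed to pyodbc.connect():

```python
from sqlalchemy.engine.url import make_url

# Hypothetical DSN; query-string keywords such as autocommit are
# interpreted by the pyodbc dialect and passed to pyodbc.connect().
url = make_url("mssql+pyodbc://scott:tiger@some_dsn?autocommit=true")
print(dict(url.query))
```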

Pass through exact Pyodbc string

A PyODBC connection string can also be sent exactly as specified in ConnectionStrings into the driver using the parameter odbc_connect. The delimiters must be URL-escaped, however, as illustrated below using urllib.parse.quote_plus:

import urllib.parse

from sqlalchemy import create_engine

params = urllib.parse.quote_plus("DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password")

engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

Driver / Unicode Support

PyODBC works best with Microsoft ODBC drivers, particularly in the areaof Unicode support on both Python 2 and Python 3.

Using the FreeTDS ODBC drivers on Linux or OSX with PyODBC is not recommended; there have historically been many Unicode-related issues in this area, including before Microsoft offered ODBC drivers for Linux and OSX. Now that Microsoft offers drivers for all platforms, these are the recommended choice for PyODBC. FreeTDS remains relevant for non-ODBC drivers such as pymssql, where it works very well.

Rowcount Support

Pyodbc only has partial support for rowcount. See the notes atRowcount Support / ORM Versioning for important notes when using ORMversioning.

Fast Executemany Mode

The Pyodbc driver has added support for a “fast executemany” mode of execution which greatly reduces round trips for a DBAPI executemany() call when using Microsoft ODBC drivers. The feature is enabled by setting the .fast_executemany flag on the DBAPI cursor when an executemany call is to be used. The SQLAlchemy pyodbc SQL Server dialect supports setting this flag automatically when the fast_executemany flag is passed to create_engine(); note that the ODBC driver must be the Microsoft driver in order to use this flag:

engine = create_engine(
    "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
    fast_executemany=True)

New in version 1.3.

See also

fast executemany- on github

mxODBC

Support for the Microsoft SQL Server database via the mxODBC driver.

DBAPI

Documentation and download information (if applicable) for mxODBC is available at: http://www.egenix.com/

Connecting

Connect String:

mssql+mxodbc://<username>:<password>@<dsnname>

Execution Modes

mxODBC features two styles of statement execution, using the cursor.execute() and cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former makes use of a particular API call specific to the SQL Server Native Client ODBC driver known as SQLDescribeParam, while the latter does not.

mxODBC apparently only makes repeated use of a single prepared statement when SQLDescribeParam is used. The advantage to prepared statement reuse is one of performance. The disadvantage is that SQLDescribeParam has a limited set of scenarios in which bind parameters are understood, including that they cannot be placed within the argument lists of function calls, anywhere outside the FROM, or even within subqueries within the FROM clause, making the usage of bind parameters within SELECT statements impossible for all but the most simplistic statements.

For this reason, the mxODBC dialect uses the “native” mode by default only forINSERT, UPDATE, and DELETE statements, and uses the escaped string mode forall other statements.

This behavior can be controlled via execution_options() using the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally use native bind parameters and a value of False will unconditionally use string-escaped parameters.
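As a sketch (the statement itself is arbitrary; the flag name is as given in the paragraph above and is interpreted only by the mxODBC dialect), the flag can be attached to a statement via execution_options():

```python
from sqlalchemy import text

# Attach the mxODBC-specific option to a statement; a value of False
# forces string-escaped parameters rather than native binds.
stmt = text("SELECT * FROM t WHERE x = :x").execution_options(
    native_odbc_execute=False
)
opts = stmt.get_execution_options()
print(opts["native_odbc_execute"])
```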

pymssql

Support for the Microsoft SQL Server database via the pymssql driver.

DBAPI

Documentation and download information (if applicable) for pymssql is available at: http://pymssql.org/

Connecting

Connect String:

mssql+pymssql://<username>:<password>@<freetds_name>/?charset=utf8

pymssql is a Python module that provides a Python DBAPI interface around FreeTDS. Compatible builds are available for Linux, MacOSX and Windows platforms.

Modern versions of this driver work very well with SQL Server and FreeTDS from Linux and are highly recommended.

zxjdbc

Support for the Microsoft SQL Server database via the zxJDBC for Jython driver.

Note

Jython is not supported by current versions of SQLAlchemy. Thezxjdbc dialect should be considered as experimental.

DBAPI

Drivers for this database are available at: http://jtds.sourceforge.net/

Connecting

Connect String:

mssql+zxjdbc://user:pass@host:port/dbname[?key=value&key=value...]

AdoDBAPI

Support for the Microsoft SQL Server database via the adodbapi driver.

DBAPI

Documentation and download information (if applicable) for adodbapi is available at: http://adodbapi.sourceforge.net/

Connecting

Connect String:

mssql+adodbapi://<username>:<password>@<dsnname>

Note

The adodbapi dialect is not implemented in SQLAlchemy versions 0.6 and above at this time.