What’s New in SQLAlchemy 0.7?

About this Document

This document describes changes between SQLAlchemy version 0.6, last released May 5, 2012, and SQLAlchemy version 0.7, undergoing maintenance releases as of October, 2012.

Document date: July 27, 2011

Introduction

This guide introduces what’s new in SQLAlchemy version 0.7, and also documents changes which affect users migrating their applications from the 0.6 series of SQLAlchemy to 0.7.

To as great a degree as possible, changes are made in such a way as to not break compatibility with applications built for 0.6. The changes that are necessarily not backwards compatible are very few, and all but one, the change to mutable attribute defaults, should affect an exceedingly small portion of applications - many of the changes regard non-public APIs and undocumented hacks some users may have been attempting to use.

A second, even smaller class of non-backwards-compatible changes is also documented. This class of change regards those features and behaviors that have been deprecated at least since version 0.5 and have been raising warnings since their deprecation. These changes would only affect applications that are still using 0.4- or early 0.5-style APIs. As the project matures, we have fewer and fewer of these kinds of changes with 0.x level releases, which is a product of our API having ever fewer features that are less than ideal for the use cases they were meant to solve.

An array of existing functionalities have been superseded in SQLAlchemy 0.7. There’s not much difference between the terms “superseded” and “deprecated”, except that the former has a much weaker suggestion that the old feature would ever be removed. In 0.7, features like synonym and comparable_property, as well as all the Extension and other event classes, have been superseded. But these “superseded” features have been re-implemented such that their implementations live mostly outside of core ORM code, so their continued “hanging around” doesn’t impact SQLAlchemy’s ability to further streamline and refine its internals, and we expect them to remain within the API for the foreseeable future.

New Features

New Event System

SQLAlchemy started early with the MapperExtension class, which provided hooks into the persistence cycle of mappers. As SQLAlchemy quickly became more componentized, pushing mappers into a more focused configurational role, many more “extension”, “listener”, and “proxy” classes popped up to solve various activity-interception use cases in an ad-hoc fashion. Part of this was driven by the divergence of activities; ConnectionProxy objects wanted to provide a system of rewriting statements and parameters; AttributeExtension provided a system of replacing incoming values, and DDL objects had events that could be switched off of dialect-sensitive callables.

0.7 re-implements virtually all of these plugin points with a new, unified approach, which retains all the functionalities of the different systems, provides more flexibility and less boilerplate, performs better, and eliminates the need to learn radically different APIs for each event subsystem. The pre-existing classes MapperExtension, SessionExtension, AttributeExtension, ConnectionProxy, PoolListener as well as the DDLElement.execute_at method are deprecated and now implemented in terms of the new system - these APIs remain fully functional and are expected to remain in place for the foreseeable future.

The new approach uses named events and user-defined callables to associate activities with events. The API’s look and feel was driven by such diverse sources as JQuery, Blinker, and Hibernate, and was also modified further on several occasions during conferences with dozens of users on Twitter, which appears to have a much higher response rate than the mailing list for such questions.

It also features an open-ended system of target specification that allows events to be associated with API classes, such as for all Session or Engine objects, with specific instances of API classes, such as for a specific Pool or Mapper, as well as for related objects like a user-defined class that’s mapped, or something as specific as a certain attribute on instances of a particular subclass of a mapped parent class. Individual listener subsystems can apply wrappers to incoming user-defined listener functions which modify how they are called - a mapper event can receive either the instance of the object being operated upon, or its underlying InstanceState object. An attribute event can opt whether or not to have the responsibility of returning a new value.

Several systems now build upon the new event API, including the new “mutable attributes” API as well as composite attributes. The greater emphasis on events has also led to the introduction of a handful of new events, including attribute expiration and refresh operations, pickle loads/dumps operations, and completed mapper construction operations.
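As a minimal sketch of the unified API, a plain function can be attached to a named event on a target such as an Engine, here via the listens_for() decorator (the in-memory SQLite URL is purely illustrative):

```python
from sqlalchemy import create_engine, event

engine = create_engine("sqlite://")
connections = []

@event.listens_for(engine, "connect")
def on_connect(dbapi_connection, connection_record):
    # fires each time the pool establishes a new DBAPI connection
    connections.append(dbapi_connection)

conn = engine.connect()
conn.close()
```

The same event.listen() mechanism applies uniformly whether the target is a class, an instance, or a mapped attribute.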

See also

Events

#1902

Hybrid Attributes, implements/supersedes synonym(), comparable_property()

The “derived attributes” example has now been turned into an official extension. The typical use case for synonym() is to provide descriptor access to a mapped column; the use case for comparable_property() is to be able to return a PropComparator from any descriptor. In practice, the approach of “derived” is easier to use, more extensible, is implemented in a few dozen lines of pure Python with almost no imports, and doesn’t require the ORM core to even be aware of it. The feature is now known as the “Hybrid Attributes” extension.

synonym() and comparable_property() are still part of the ORM, though their implementations have been moved outwards, building on an approach that is similar to that of the hybrid extension, so that the core ORM mapper/query/property modules aren’t really aware of them otherwise.
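For illustration, a minimal hybrid along the lines of the extension’s documented Interval example; the class is hypothetical and shown unmapped here, though in practice it would be a mapped class:

```python
from sqlalchemy.ext.hybrid import hybrid_property

class Interval(object):
    def __init__(self, start, end):
        self.start = start
        self.end = end

    @hybrid_property
    def length(self):
        # at the instance level this is plain Python; at the class
        # level the same expression produces a SQL construct against
        # the mapped columns
        return self.end - self.start
```

The single function serves both as a Python-level accessor and as a generator of SQL expressions, which is what allows the implementation to live entirely outside the ORM core.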

See also

Hybrid Attributes

#1903

Speed Enhancements

As is customary with all major SQLA releases, a wide pass through the internals to reduce overhead and call counts has been made which further reduces the work needed in common scenarios. Highlights of this release include:

  • The flush process will now bundle INSERT statements into batches fed to cursor.executemany(), for rows where the primary key is already present. In particular this usually applies to the “child” table on a joined table inheritance configuration, meaning the number of calls to cursor.execute for a large bulk insert of joined-table objects can be cut in half, allowing native DBAPI optimizations to take place for those statements passed to cursor.executemany() (such as re-using a prepared statement).

  • The codepath invoked when accessing a many-to-one reference to a related object that’s already loaded has been greatly simplified. The identity map is checked directly without the need to generate a new Query object first, which is expensive in the context of thousands of in-memory many-to-ones being accessed. The usage of constructed-per-call “loader” objects is also no longer used for the majority of lazy attribute loads.

  • The rewrite of composites allows a shorter codepath when mapper internals access mapped attributes within a flush.

  • New inlined attribute access functions replace the previous usage of “history” when the “save-update” and other cascade operations need to cascade among the full scope of data members associated with an attribute. This reduces the overhead of generating a new History object for this speed-critical operation.

  • The internals of the ExecutionContext, the object corresponding to a statement execution, have been inlined and simplified.

  • The bind_processor() and result_processor() callables generated by types for each statement execution are now cached (carefully, so as to avoid memory leaks for ad-hoc types and dialects) for the lifespan of that type, further reducing per-statement call overhead.

  • The collection of “bind processors” for a particular Compiled instance of a statement is also cached on the Compiled object, taking further advantage of the “compiled cache” used by the flush process to re-use the same compiled form of INSERT, UPDATE, DELETE statements.

A demonstration of callcount reduction including a sample benchmark script is at http://techspot.zzzeek.org/2010/12/12/a-tale-of-three-profiles/

Composites Rewritten

The “composite” feature has been rewritten, like synonym() and comparable_property(), to use a lighter weight implementation based on descriptors and events, rather than building into the ORM internals. This allowed the removal of some latency from the mapper/unit of work internals, and simplifies the workings of composite. The composite attribute now no longer conceals the underlying columns it builds upon, which now remain as regular attributes. Composites can also act as a proxy for relationship() as well as Column() attributes.

The major backwards-incompatible change of composites is that they no longer use the mutable=True system to detect in-place mutations. Please use the Mutation Tracking extension to establish in-place change events to existing composite usage.

See also

Composite Column Types

Mutation Tracking

#2008 #2024

More succinct form of query.join(target, onclause)

The default method of issuing query.join() to a target with an explicit onclause is now:

    query.join(SomeClass, SomeClass.id==ParentClass.some_id)

In 0.6, this usage was considered to be an error, because join() accepts multiple arguments corresponding to multiple JOIN clauses - the two-argument form needed to be in a tuple to disambiguate between single-argument and two-argument join targets. In the middle of 0.6 we added detection and an error message for this specific calling style, since it was so common. In 0.7, since we are detecting the exact pattern anyway, and since having to type out a tuple for no reason is extremely annoying, the non-tuple method now becomes the “normal” way to do it. The “multiple JOIN” use case is exceedingly rare compared to the single join case, and multiple joins these days are more clearly represented by multiple calls to join().

The tuple form will remain for backwards compatibility.

Note that all the other forms of query.join() remainunchanged:

    query.join(MyClass.somerelation)
    query.join("somerelation")
    query.join(MyTarget)
    # ... etc

Querying with Joins

#1923

Mutation event extension, supersedes “mutable=True”

A new extension, Mutation Tracking, provides a mechanism by which user-defined datatypes can provide change events back to the owning parent or parents. The extension includes an approach for scalar database values, such as those managed by PickleType, postgresql.ARRAY, or other custom MutableType classes, as well as an approach for ORM “composites”, those configured using composite().
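A sketch of the scalar-value approach, along the lines of the extension’s documented example; MutableDict here is a user-defined illustration built on the extension’s Mutable base class:

```python
from sqlalchemy.ext.mutable import Mutable

class MutableDict(Mutable, dict):
    """a change-tracking dict, following the extension's documented pattern"""

    @classmethod
    def coerce(cls, key, value):
        # convert plain dictionaries to MutableDict
        if not isinstance(value, MutableDict):
            if isinstance(value, dict):
                return MutableDict(value)
            return Mutable.coerce(key, value)
        return value

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self.changed()  # alert the owning parent(s) of the change
```

Associated with a column type such as PickleType via the extension, in-place changes like `obj.data['key'] = 'value'` then flag the parent object as dirty, rather than requiring a full scan at flush time.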

See also

Mutation Tracking

NULLS FIRST / NULLS LAST operators

These are implemented as an extension to the asc() and desc() operators, called nullsfirst() and nullslast().
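For example, a sketch applying NULLS LAST to a descending ORDER BY (the table here is illustrative):

```python
from sqlalchemy import desc, nullslast
from sqlalchemy.sql import table, column

accounts = table('accounts', column('balance'))

# renders: ORDER BY accounts.balance DESC NULLS LAST
stmt = accounts.select().order_by(nullslast(desc(accounts.c.balance)))
print(stmt)
```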

See also

nullsfirst()

nullslast()

#723

select.distinct(), query.distinct() accepts *args for PostgreSQL DISTINCT ON

This was already available by passing a list of expressions to the distinct keyword argument of select(). The distinct() method of select() and Query now accept positional arguments which are rendered as DISTINCT ON when a PostgreSQL backend is used.

distinct()

Query.distinct()

#1069

Index() can be placed inline inside of Table, __table_args__

The Index() construct can be created inline with a Table definition, using strings as column names, as an alternative to the creation of the index outside of the Table. That is:

    Table('mytable', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', String(50), nullable=False),
        Index('idx_name', 'name')
    )

The primary rationale here is for the benefit of declarative __table_args__, particularly when used with mixins:

    class HasNameMixin(object):
        name = Column('name', String(50), nullable=False)

        @declared_attr
        def __table_args__(cls):
            return (Index('name'), {})

    class User(HasNameMixin, Base):
        __tablename__ = 'user'
        id = Column('id', Integer, primary_key=True)

Indexes

Window Function SQL Construct

A “window function” provides to a statement information about the result set as it’s produced. This allows criteria against various things like “row number”, “rank” and so forth. They are known to be supported at least by PostgreSQL, SQL Server and Oracle, possibly others.

The best introduction to window functions is on PostgreSQL’s site, where window functions have been supported since version 8.4:

http://www.postgresql.org/docs/9.0/static/tutorial-window.html

SQLAlchemy provides a simple construct typically invoked via an existing function clause, using the over() method, which accepts order_by and partition_by keyword arguments. Below we replicate the first example in PG’s tutorial:

    from sqlalchemy.sql import table, column, select, func

    empsalary = table('empsalary',
        column('depname'),
        column('empno'),
        column('salary'))

    s = select([
        empsalary,
        func.avg(empsalary.c.salary).
            over(partition_by=empsalary.c.depname).
            label('avg')
    ])

    print(s)

SQL:

    SELECT empsalary.depname, empsalary.empno, empsalary.salary,
        avg(empsalary.salary) OVER (PARTITION BY empsalary.depname) AS avg
    FROM empsalary

sqlalchemy.sql.expression.over

#1844

execution_options() on Connection accepts “isolation_level” argument

This sets the transaction isolation level for a single Connection, until that Connection is closed and its underlying DBAPI resource returned to the connection pool, upon which the isolation level is reset back to the default. The default isolation level is set using the isolation_level argument to create_engine().
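A short sketch using SQLite, one of the supported backends; the in-memory URL is illustrative:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

# the isolation level applies for the life of this Connection only;
# it resets when the connection is returned to the pool
conn = engine.connect().execution_options(isolation_level="SERIALIZABLE")
value = conn.execute(text("SELECT 1")).scalar()
conn.close()
```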

Transaction isolation support is currently only supported by the PostgreSQL and SQLite backends.

execution_options()

#2001

TypeDecorator works with integer primary key columns

A TypeDecorator which extends the behavior of Integer can be used with a primary key column. The “autoincrement” feature of Column will now recognize that the underlying database column is still an integer so that lastrowid mechanisms continue to function. The TypeDecorator itself will have its result value processor applied to newly generated primary keys, including those received by the DBAPI cursor.lastrowid accessor.
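A sketch of such a type; CoerceToInt is a hypothetical example of a decorator around Integer, now usable on a primary key column:

```python
from sqlalchemy import Integer
from sqlalchemy.types import TypeDecorator

class CoerceToInt(TypeDecorator):
    """hypothetical decorator around Integer, usable as a primary key type"""
    impl = Integer

    def process_result_value(self, value, dialect):
        # applied to newly generated primary keys as well,
        # including values received via cursor.lastrowid
        return None if value is None else int(value)
```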

#2005 #2006

TypeDecorator is present in the “sqlalchemy” import space

No longer need to import this from sqlalchemy.types, it’s now mirrored in sqlalchemy.

New Dialects

Dialects have been added:

  • a MySQLdb driver for the Drizzle database:

Drizzle

  • support for the pymysql DBAPI:

pymysql Notes

  • psycopg2 now works with Python 3

Behavioral Changes (Backwards Compatible)

C Extensions Build by Default

This is as of 0.7b4. The exts will build if cPython 2.xx is detected. If the build fails, such as on a Windows install, that condition is caught and the non-C install proceeds. The C exts won’t build if Python 3 or PyPy is used.

Query.count() simplified, should work virtually always

The very old guesswork which occurred within Query.count() has been modernized to use .from_self(). That is, query.count() is now equivalent to:

    query.from_self(func.count(literal_column('1'))).scalar()

Previously, internal logic attempted to rewrite the columns clause of the query itself, and upon detection of a “subquery” condition, such as a column-based query that might have aggregates in it, or a query with DISTINCT, would go through a convoluted process of rewriting the columns clause. This logic failed in complex conditions, particularly those involving joined table inheritance, and was long obsolete by the more comprehensive .from_self() call.

The SQL emitted by query.count() is now always of the form:

    SELECT count(1) AS count_1 FROM (
        SELECT user.id AS user_id, user.name AS user_name from user
    ) AS anon_1

that is, the original query is preserved entirely inside of a subquery, with no more guessing as to how count should be applied.

#2093

To emit a non-subquery form of count()

MySQL users have already reported that the MyISAM engine not surprisingly falls over completely with this simple change. Note that for a simple count() that optimizes for DBs that can’t handle simple subqueries, func.count() should be used:

    from sqlalchemy import func
    session.query(func.count(MyClass.id)).scalar()

or for count(*):

    from sqlalchemy import func, literal_column
    session.query(func.count(literal_column('*'))).select_from(MyClass).scalar()

LIMIT/OFFSET clauses now use bind parameters

The LIMIT and OFFSET clauses, or their backend equivalents (i.e. TOP, ROW NUMBER OVER, etc.), use bind parameters for the actual values, for all backends which support it (most except for Sybase). This allows better query optimizer performance as the textual string for multiple statements with differing LIMIT/OFFSET are now identical.
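This can be seen in the compiled output of any select; a minimal sketch against an illustrative table:

```python
from sqlalchemy.sql import table, column

t = table('t', column('x'))

# the values 5 and 10 become bind parameters in the compiled string,
# so statements with differing LIMIT/OFFSET share identical SQL text
stmt = t.select().limit(5).offset(10)
print(stmt)
```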

#805

Logging enhancements

Vinay Sajip has provided a patch to our logging system such that the “hex string” embedded in logging statements for engines and pools is no longer needed to allow the echo flag to work correctly. A new system that uses filtered logging objects allows us to maintain our current behavior of echo being local to individual engines without the need for additional identifying strings local to those engines.

#1926

Simplified polymorphic_on assignment

The population of the polymorphic_on column-mapped attribute, when used in an inheritance scenario, now occurs when the object is constructed, i.e. its __init__ method is called, using the init event. The attribute then behaves the same as any other column-mapped attribute. Previously, special logic would fire off during flush to populate this column, which prevented any user code from modifying its behavior. The new approach improves upon this in three ways: 1. the polymorphic identity is now present on the object as soon as it’s constructed; 2. the polymorphic identity can be changed by user code without any difference in behavior from any other column-mapped attribute; 3. the internals of the mapper during flush are simplified and no longer need to make special checks for this column.

#1895

contains_eager() chains across multiple paths (i.e. “all()”)

The contains_eager() modifier now will chain itself for a longer path without the need to emit individual contains_eager() calls. Instead of:

    session.query(A).options(contains_eager(A.b), contains_eager(A.b, B.c))

you can say:

    session.query(A).options(contains_eager(A.b, B.c))

#2032

Flushing of orphans that have no parent is allowed

We’ve had a long standing behavior that checks for a so-called “orphan” during flush, that is, an object which is associated with a relationship() that specifies “delete-orphan” cascade, has been newly added to the session for an INSERT, and no parent relationship has been established. This check was added years ago to accommodate some test cases which tested the orphan behavior for consistency. In modern SQLA, this check is no longer needed on the Python side. The equivalent behavior of the “orphan check” is accomplished by making the foreign key reference to the object’s parent row NOT NULL, where the database does its job of establishing data consistency in the same way SQLA allows most other operations to do. If the object’s parent foreign key is nullable, then the row can be inserted. The “orphan” behavior runs when the object was persisted with a particular parent, and is then disassociated with that parent, leading to a DELETE statement emitted for it.

#1912

Warnings generated when collection members, scalar referents not part of the flush

Warnings are now emitted when related objects referenced via a loaded relationship() on a parent object marked as “dirty” are not present in the current Session.

The save-update cascade takes effect when objects are added to the Session, or when objects are first associated with a parent, so that an object and everything related to it are usually all present in the same Session. However, if save-update cascade is disabled for a particular relationship(), then this behavior does not occur, and the flush process does not try to correct for it, instead staying consistent to the configured cascade behavior. Previously, when such objects were detected during the flush, they were silently skipped. The new behavior is that a warning is emitted, for the purposes of alerting to a situation that more often than not is the source of unexpected behavior.

#1973

Setup no longer installs a Nose plugin

Since we moved to nose we’ve used a plugin that installs via setuptools, so that the nosetests script would automatically run SQLA’s plugin code, necessary for our tests to have a full environment. In the middle of 0.6, we realized that the import pattern here meant that Nose’s “coverage” plugin would break, since “coverage” requires that it be started before any modules to be covered are imported; so in the middle of 0.6 we made the situation worse by adding a separate sqlalchemy-nose package to the build to overcome this.

In 0.7 we’ve done away with trying to get nosetests to work automatically, since the SQLAlchemy module would produce a large number of nose configuration options for all usages of nosetests, not just the SQLAlchemy unit tests themselves, and the additional sqlalchemy-nose install was an even worse idea, producing an extra package in Python environments. The sqla_nose.py script in 0.7 is now the only way to run the tests with nose.

#1949

Non-Table-derived constructs can be mapped

A construct that isn’t against any Table at all, like a function, can be mapped.

    from sqlalchemy import select, func
    from sqlalchemy.orm import mapper

    class Subset(object):
        pass

    selectable = select(["x", "y", "z"]).select_from(func.some_db_function()).alias()
    mapper(Subset, selectable, primary_key=[selectable.c.x])

#1876

aliased() accepts FromClause elements

This is a convenience helper such that in the case a plain FromClause, such as a select, Table or join is passed to the orm.aliased() construct, it passes through to the .alias() method of that from construct rather than constructing an ORM level AliasedClass.
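A sketch against an illustrative table; the name parameter is optional:

```python
from sqlalchemy.sql import table, column
from sqlalchemy.orm import aliased

t = table('mytable', column('x'))

# a plain FromClause passes through to t.alias() rather than
# producing an ORM-level AliasedClass
a = aliased(t, name='t1')
print(a.name)
```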

#2018

Session.connection(), Session.execute() accept ‘bind’

This is to allow execute/connection operations to participate in the open transaction of an engine explicitly. It also allows custom subclasses of Session that implement their own get_bind() method and arguments to use those custom arguments with both the execute() and connection() methods equally.

Session.connection()

Session.execute()

#1996

Standalone bind parameters in columns clause auto-labeled.

Bind parameters present in the “columns clause” of a select are now auto-labeled like other “anonymous” clauses, which among other things allows their “type” to be meaningful when the row is fetched, as in result row processors.

SQLite - relative file paths are normalized through os.path.abspath()

This is so that a script that changes the current directory will continue to target the same location as subsequent SQLite connections are established.

#2036

MS-SQL - String/Unicode/VARCHAR/NVARCHAR/VARBINARY emit “max” for no length

On the MS-SQL backend, the String/Unicode types, and their counterparts VARCHAR/NVARCHAR, as well as VARBINARY (#1833) emit “max” as the length when no length is specified. This makes it more compatible with PostgreSQL’s VARCHAR type which is similarly unbounded when no length is specified. SQL Server defaults the length on these types to ‘1’ when no length is specified.

Behavioral Changes (Backwards Incompatible)

Note again, aside from the default mutability change, most of these changes are extremely minor and will not affect most users.

PickleType and ARRAY mutability turned off by default

This change refers to the default behavior of the ORM when mapping columns that have either the PickleType or postgresql.ARRAY datatypes. The mutable flag is now set to False by default. If an existing application uses these types and depends upon detection of in-place mutations, the type object must be constructed with mutable=True to restore the 0.6 behavior:

    Table('mytable', metadata,
        # ....

        Column('pickled_data', PickleType(mutable=True))
    )

The mutable=True flag is being phased out, in favor of the new Mutation Tracking extension. This extension provides a mechanism by which user-defined datatypes can provide change events back to the owning parent or parents.

The previous approach of using mutable=True does not provide for change events - instead, the ORM must scan through all mutable values present in a session and compare them against their original value for changes every time flush() is called, which is a very time consuming event. This is a holdover from the very early days of SQLAlchemy when flush() was not automatic and the history tracking system was not nearly as sophisticated as it is now.

Existing applications which use PickleType, postgresql.ARRAY or other MutableType subclasses, and require in-place mutation detection, should migrate to the new mutation tracking system, as mutable=True is likely to be deprecated in the future.

#1980

Mutability detection of composite() requires the Mutation Tracking Extension

So-called “composite” mapped attributes, those configured using the technique described at Composite Column Types, have been re-implemented such that the ORM internals are no longer aware of them (leading to shorter and more efficient codepaths in critical sections). While composite types are generally intended to be treated as immutable value objects, this was never enforced. For applications that use composites with mutability, the Mutation Tracking extension offers a base class which establishes a mechanism for user-defined composite types to send change event messages back to the owning parent or parents of each object.

Applications which use composite types and rely upon in-place mutation detection of these objects should either migrate to the “mutation tracking” extension, or change the usage of the composite types such that in-place changes are no longer needed (i.e., treat them as immutable value objects).

SQLite - the SQLite dialect now uses NullPool for file-based databases

This change is 99.999% backwards compatible, unless you are using temporary tables across connection pool connections.

A file-based SQLite connection is blazingly fast, and using NullPool means that each call to Engine.connect creates a new pysqlite connection.

Previously, the SingletonThreadPool was used, which meant that all connections to a certain engine in a thread would be the same connection. It’s intended that the new approach is more intuitive, particularly when multiple connections are used.

SingletonThreadPool is still the default engine when a :memory: database is used.
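A sketch of the two cases; the file path is illustrative and no connection is actually opened here:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import SingletonThreadPool

# under 0.7, a file-based database selects NullPool by default,
# so each Engine.connect() opens a fresh pysqlite connection
file_engine = create_engine("sqlite:///somefile.db")

# a :memory: database keeps SingletonThreadPool as its default
mem_engine = create_engine("sqlite://")
```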

Note that this change breaks temporary tables used across Session commits, due to the way SQLite handles temp tables. See the note at http://www.sqlalchemy.org/docs/dialects/sqlite.html#using-temporary-tables-with-sqlite if temporary tables beyond the scope of one pool connection are desired.

#1921

Session.merge() checks version ids for versioned mappers

Session.merge() will check the version id of the incoming state against that of the database, assuming the mapping uses version ids and incoming state has a version_id assigned, and raise StaleDataError if they don’t match. This is the correct behavior, in that if incoming state contains a stale version id, it should be assumed the state is stale.

If merging data into a versioned state, the version id attribute can be left undefined, and no version check will take place.

This check was confirmed by examining what Hibernate does - both the merge() and the versioning features were originally adapted from Hibernate.

#2027

Tuple label names in Query Improved

This improvement is potentially slightly backwards incompatible for an application that relied upon the old behavior.

Given two mapped classes Foo and Bar each with a column spam:

    qa = session.query(Foo.spam)
    qb = session.query(Bar.spam)

    qu = qa.union(qb)

The name given to the single column yielded by qu will be spam. Previously it would be something like foo_spam due to the way the union would combine things, which is inconsistent with the name spam in the case of a non-unioned query.

#1942

Mapped column attributes reference the most specific column first

This is a change to the behavior involved when a mapped column attribute references multiple columns, specifically when dealing with an attribute on a joined-table subclass that has the same name as that of an attribute on the superclass.

Using declarative, the scenario is this:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)

    class Child(Parent):
        __tablename__ = 'child'
        id = Column(Integer, ForeignKey('parent.id'), primary_key=True)

Above, the attribute Child.id refers to both the child.id column as well as parent.id - this due to the name of the attribute. If it were named differently on the class, such as Child.child_id, it then maps distinctly to child.id, with Child.id being the same attribute as Parent.id.

When the id attribute is made to reference both parent.id and child.id, it stores them in an ordered list. An expression such as Child.id then refers to just one of those columns when rendered. Up until 0.6, this column would be parent.id. In 0.7, it is the less surprising child.id.

The legacy of this behavior deals with behaviors and restrictions of the ORM that don’t really apply anymore; all that was needed was to reverse the order.

A primary advantage of this approach is that it’s now easier to construct primaryjoin expressions that refer to the local column:

    class Child(Parent):
        __tablename__ = 'child'
        id = Column(Integer, ForeignKey('parent.id'), primary_key=True)
        some_related = relationship("SomeRelated",
                            primaryjoin="Child.id==SomeRelated.child_id")

    class SomeRelated(Base):
        __tablename__ = 'some_related'
        id = Column(Integer, primary_key=True)
        child_id = Column(Integer, ForeignKey('child.id'))

Prior to 0.7 the Child.id expression would reference Parent.id, and it would be necessary to map child.id to a distinct attribute.

It also means that a query like this one changes its behavior:

    session.query(Parent).filter(Child.id > 7)

In 0.6, this would render:

    SELECT parent.id AS parent_id
    FROM parent
    WHERE parent.id > :id_1

in 0.7, you get:

    SELECT parent.id AS parent_id
    FROM parent, child
    WHERE child.id > :id_1

which you’ll note is a cartesian product - this behavior is now equivalent to that of any other attribute that is local to Child. The with_polymorphic() method, or a similar strategy of explicitly joining the underlying Table objects, is used to render a query against all Parent objects with criteria against Child, in the same manner as that of 0.5 and 0.6:

    print(s.query(Parent).with_polymorphic([Child]).filter(Child.id > 7))

Which on both 0.6 and 0.7 renders:

    SELECT parent.id AS parent_id, child.id AS child_id
    FROM parent LEFT OUTER JOIN child ON parent.id = child.id
    WHERE child.id > :id_1

Another effect of this change is that a joined-inheritance load across two tables will populate from the child table’s value, not that of the parent table. An unusual case is that a query against “Parent” using with_polymorphic="*" issues a query against “parent”, with a LEFT OUTER JOIN to “child”. The row is located in “Parent”, sees the polymorphic identity corresponds to “Child”, but suppose the actual row in “child” has been deleted. Due to this corruption, the row comes in with all the columns corresponding to “child” set to NULL - this is now the value that gets populated, not the one in the parent table.

#1892

Mapping to joins with two or more same-named columns requires explicit declaration

This is somewhat related to the previous change in #1892. When mapping to a join, same-named columns must be explicitly linked to mapped attributes, i.e. as described in Mapping a Class Against Multiple Tables.

Given two tables foo and bar, each with a primary key column id, the following now produces an error:

foobar = foo.join(bar, foo.c.id == bar.c.foo_id)
mapper(FooBar, foobar)

This is because the mapper() refuses to guess which column is the primary representation of FooBar.id - is it foo.c.id or is it bar.c.id? The attribute must be explicit:

foobar = foo.join(bar, foo.c.id == bar.c.foo_id)
mapper(FooBar, foobar, properties={
    'id': [foo.c.id, bar.c.id]
})

#1896

Mapper requires that polymorphic_on column be present in the mapped selectable

This is a warning in 0.6, now an error in 0.7. The column given for polymorphic_on must be in the mapped selectable. This is to prevent some occasional user errors such as:

mapper(SomeClass, sometable, polymorphic_on=some_lookup_table.c.id)

where above the polymorphic_on needs to be on a sometable column, in this case perhaps sometable.c.some_lookup_id. There are also some “polymorphic union” scenarios where similar mistakes sometimes occur.

Such a configuration error has always been “wrong”, and the above mapping doesn’t work as specified - the column would be ignored. It is however potentially backwards incompatible in the rare case that an application has been unknowingly relying upon this behavior.

#1875

DDL() constructs now escape percent signs

Previously, percent signs in DDL() strings would have to be escaped, i.e. as %%, for those DBAPIs that accept pyformat or format binds (i.e. psycopg2, mysql-python); this was inconsistent with text() constructs, which did this automatically. The same escaping now occurs for DDL() as for text().

#1897

Table.c / MetaData.tables refined a bit, don’t allow direct mutation

Another area where some users were tinkering around in a way that doesn’t actually work as expected, but still left an exceedingly small chance that some application was relying upon the behavior: the construct returned by the .c attribute on Table and the .tables attribute on MetaData is explicitly non-mutable. The “mutable” version of the construct is now private. Adding columns to .c involves using the append_column() method of Table, which ensures things are associated with the parent Table in the appropriate way; similarly, MetaData.tables has a contract with the Table objects stored in this dictionary, as well as a little bit of new bookkeeping in that a set() of all schema names is tracked, which is satisfied only by using the public Table constructor as well as Table.tometadata().
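A short sketch of the supported path; the “user” table and “name” column are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()
user = Table("user", metadata, Column("id", Integer, primary_key=True))

# Direct mutation of user.c is not supported; append_column() is the
# public path, and it associates the Column with its parent Table.
user.append_column(Column("name", String(50)))
```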

It is of course possible that the ColumnCollection and dict collections consulted by these attributes could someday implement events on all of their mutational methods such that the appropriate bookkeeping occurred upon direct mutation of the collections, but until someone has the motivation to implement all that along with dozens of new unit tests, narrowing the paths to mutation of these collections will ensure no application is attempting to rely upon usages that are currently not supported.

#1893 #1917

server_default consistently returns None for all inserted_primary_key values

Established consistency when server_default is present on an Integer PK column. SQLA doesn’t pre-fetch these, nor do they come back in cursor.lastrowid (DBAPI). Ensured all backends consistently return None in result.inserted_primary_key for these - some backends may have returned a value previously. Using a server_default on a primary key column is extremely unusual. If a special function or SQL expression is used to generate primary key defaults, this should be established as a Python-side “default” instead of server_default.
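A sketch of the Python-side approach, assuming a hypothetical key-generating function stands in for whatever was previously configured as a server_default; because the default fires per-INSERT on the Python side, the value is available in result.inserted_primary_key:

```python
import itertools
from sqlalchemy import Column, Integer, MetaData, Table, create_engine

# Hypothetical key generator standing in for a special function or
# SQL expression previously wired up as a server_default.
next_id = itertools.count(1000)

metadata = MetaData()
accounts = Table(
    "accounts",
    metadata,
    # A Python-side default is invoked for each INSERT, so the
    # generated value can be reported back to the caller.
    Column("id", Integer, primary_key=True, default=lambda: next(next_id)),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    result = conn.execute(accounts.insert())
```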

Regarding reflection for this case, reflection of an int PK col with a server_default sets the “autoincrement” flag to False, except in the case of a PG SERIAL col where we detected a sequence default.

#2020 #2021

The sqlalchemy.exceptions alias in sys.modules is removed

For a few years we’ve added the string sqlalchemy.exceptions to sys.modules, so that a statement like “import sqlalchemy.exceptions” would work. The name of the core exceptions module has been exc for a long time now, so the recommended import for this module is:

from sqlalchemy import exc

The exceptions name is still present in “sqlalchemy” for applications which might have said from sqlalchemy import exceptions, but they should also start using the exc name.

Query Timing Recipe Changes

While not part of SQLAlchemy itself, it’s worth mentioning that the rework of the ConnectionProxy into the new event system means it is no longer appropriate for the “Timing all Queries” recipe. Please adjust query-timers to use the before_cursor_execute() and after_cursor_execute() events, demonstrated in the updated recipe UsageRecipes/Profiling.
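The adjusted pattern can be sketched roughly as follows, against an in-memory SQLite engine for illustration; stashing the start time on the execution context is one common convention, not a requirement:

```python
import time
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")
timings = []

@event.listens_for(engine, "before_cursor_execute")
def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    # Record the start time on the execution context for this statement.
    context._query_start_time = time.time()

@event.listens_for(engine, "after_cursor_execute")
def after_cursor_execute(conn, cursor, statement, parameters, context, executemany):
    # Pair the end time with the start time recorded above.
    timings.append((statement, time.time() - context._query_start_time))

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```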

Deprecated API

Default constructor on types will not accept arguments

Simple types like Integer, Date etc. in the core types module don’t accept arguments. The default constructor that accepts/ignores a catchall *args, **kwargs is restored as of 0.7b4/0.7.0, but emits a deprecation warning.

If arguments are being used with a core type like Integer, it may be that you intended to use a dialect-specific type, such as sqlalchemy.dialects.mysql.INTEGER, which does accept a “display_width” argument, for example.
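For example, a brief sketch of the dialect-specific spelling; the “counter” column name is illustrative:

```python
from sqlalchemy import Column
from sqlalchemy.dialects.mysql import INTEGER

# The core Integer type takes no arguments; display width is a
# MySQL-specific notion, so it lives on the dialect type.
counter = Column("counter", INTEGER(display_width=4))
```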

compile_mappers() renamed configure_mappers(), simplified configuration internals

This system slowly morphed from something small, implemented local to an individual mapper, and poorly named, into something that’s more of a global “registry”-level function, still poorly named, so we’ve fixed both by moving the implementation out of Mapper altogether and renaming it to configure_mappers(). It is of course normally not needed for an application to call configure_mappers(), as this process occurs on an as-needed basis, as soon as the mappings are needed via attribute or query access.

#1966

Core listener/proxy superseded by event listeners

PoolListener, ConnectionProxy, DDLElement.execute_at are superseded by event.listen(), using the PoolEvents, EngineEvents, DDLEvents dispatch targets, respectively.

ORM extensions superseded by event listeners

MapperExtension, AttributeExtension, SessionExtension are superseded by event.listen(), using the MapperEvents/InstanceEvents, AttributeEvents, SessionEvents dispatch targets, respectively.
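A rough sketch of the replacement pattern, spelled with present-day import locations (SQLAlchemy 1.4+); the User model and the name-stripping behavior are hypothetical:

```python
from sqlalchemy import Column, Integer, String, create_engine, event
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# Where 0.6 code would subclass MapperExtension and override
# before_insert(), a plain function is now attached to the
# mapper-level event.
@event.listens_for(User, "before_insert")
def strip_name(mapper, connection, target):
    target.name = (target.name or "").strip()

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    user = User(name="  alice  ")
    session.add(user)
    session.commit()
    saved_name = user.name
```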

Sending a string to ‘distinct’ in select() for MySQL should be done via prefixes

This obscure feature allows this pattern with the MySQL backend:

select([mytable], distinct='ALL', prefixes=['HIGH_PRIORITY'])

The prefixes keyword or prefix_with() method should be used for non-standard or unusual prefixes:

select([mytable]).prefix_with('HIGH_PRIORITY', 'ALL')

useexisting superseded by extend_existing and keep_existing

The useexisting flag on Table has been superseded by a new pair of flags, keep_existing and extend_existing. extend_existing is equivalent to useexisting - the existing Table is returned, and additional constructor elements are added. With keep_existing, the existing Table is returned, but additional constructor elements are not added - these elements are only applied when the Table is newly created.
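The difference between the two flags can be sketched briefly; the table and column names here are illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, String, Table

metadata = MetaData()
t1 = Table("user", metadata, Column("id", Integer, primary_key=True))

# extend_existing: the existing Table comes back, and the new
# constructor elements are added to it (the old useexisting behavior).
t2 = Table("user", metadata, Column("name", String(50)), extend_existing=True)

# keep_existing: the existing Table comes back unchanged; the extra
# elements would apply only if the Table were being newly created.
t3 = Table("user", metadata, Column("email", String(50)), keep_existing=True)
```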

Backwards Incompatible API Changes

Callables passed to bindparam() don’t get evaluated - affects the Beaker example

#1950

Note this affects the Beaker caching example, where the workings of the _params_from_query() function needed a slight adjustment. If you’re using code from the Beaker example, this change should be applied.

types.type_map is now private, types._type_map

We noticed some users tapping into this dictionary inside of sqlalchemy.types as a shortcut to associating Python types with SQL types. We can’t guarantee the contents or format of this dictionary, and additionally the business of associating Python types in a one-to-one fashion has some grey areas that are best decided by individual applications, so we’ve underscored this attribute.

#1870

Renamed the alias keyword arg of standalone alias() function to name

This is so that the keyword argument name matches that of the alias() methods on all FromClause objects, as well as the name argument on Query.subquery().

Only code that uses the standalone alias() function, rather than the bound methods, and which passes the alias name using the explicit keyword alias, not positionally, needs modification here.
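A one-line sketch of the new spelling; the “users” table is illustrative:

```python
from sqlalchemy import Column, Integer, MetaData, Table, alias

metadata = MetaData()
users = Table("users", metadata, Column("id", Integer, primary_key=True))

# 0.6 spelled this alias(users, alias="u"); the keyword is now
# "name", matching FromClause.alias() and Query.subquery().
u = alias(users, name="u")
```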

Non-public Pool methods underscored

All methods of Pool and subclasses which are not intended for public use have been renamed with underscores. That they were not named this way previously was a bug.

Pooling methods now underscored or removed:

Pool.create_connection() -> Pool._create_connection()

Pool.do_get() -> Pool._do_get()

Pool.do_return_conn() -> Pool._do_return_conn()

Pool.do_return_invalid() -> removed, was not used

Pool.return_conn() -> Pool._return_conn()

Pool.get() -> Pool._get(), public API is Pool.connect()

SingletonThreadPool.cleanup() -> _cleanup()

SingletonThreadPool.dispose_local() -> removed, use conn.invalidate()

#1982

Previously Deprecated, Now Removed

Query.join(), Query.outerjoin(), eagerload(), eagerload_all(), others no longer allow lists of attributes as arguments

Passing a list of attributes or attribute names to Query.join, eagerload(), and similar has been deprecated since 0.5:

# old way, deprecated since 0.5
session.query(Houses).join([Houses.rooms, Room.closets])
session.query(Houses).options(eagerload_all([Houses.rooms, Room.closets]))

These methods all accept *args as of the 0.5 series:

# current way, in place since 0.5
session.query(Houses).join(Houses.rooms, Room.closets)
session.query(Houses).options(eagerload_all(Houses.rooms, Room.closets))

ScopedSession.mapper is removed

This feature provided a mapper extension which linked class-based functionality with a particular ScopedSession, in particular providing the behavior such that new object instances would be automatically associated with that session. The feature was overused by tutorials and frameworks, which led to great user confusion due to its implicit behavior, and was deprecated in 0.5.5. Techniques for replicating its functionality are at [wiki:UsageRecipes/SessionAwareMapper].