What’s New in SQLAlchemy 0.8?

About this Document

This document describes changes between SQLAlchemy version 0.7, undergoing maintenance releases as of October, 2012, and SQLAlchemy version 0.8, which is expected for release in early 2013.

Document date: October 25, 2012
Updated: March 9, 2013

Introduction

This guide introduces what’s new in SQLAlchemy version 0.8, and also documents changes which affect users migrating their applications from the 0.7 series of SQLAlchemy to 0.8.

SQLAlchemy releases are closing in on 1.0, and each new version since 0.5 features fewer major usage changes. Most applications that are settled into modern 0.7 patterns should be movable to 0.8 with no changes. Applications that use 0.6 and even 0.5 patterns should be directly migratable to 0.8 as well, though larger applications may want to test with each interim version.

Platform Support

Targeting Python 2.5 and Up Now

SQLAlchemy 0.8 will target Python 2.5 and forward; compatibility for Python 2.4 is being dropped.

The internals will be able to make use of Python ternaries (that is, x if y else z), which will improve things versus the usage of y and x or z, which naturally has been the source of some bugs, as well as context managers (that is, with:) and perhaps in some cases try:/except:/else: blocks, which will help with code readability.
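
To illustrate the pitfall (a quick sketch, not from the original document): the and/or idiom returns the wrong branch whenever the intended "true" value is itself falsy, which the real ternary avoids:

    x, y, z = 0, True, "fallback"

    # old idiom: since x is 0 (falsy), "y and x" is falsy and z wins
    result_old = y and x or z        # "fallback" - wrong
    # the Python 2.5+ conditional expression returns x as intended
    result_new = x if y else z       # 0 - correct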

SQLAlchemy will eventually drop 2.5 support as well - when 2.6 is reached as the baseline, SQLAlchemy will move to use 2.6/3.3 in-place compatibility, removing the usage of the 2to3 tool and maintaining a source base that works with Python 2 and 3 at the same time.

New ORM Features

Rewritten relationship() mechanics

0.8 features a much improved and capable system regarding how relationship() determines how to join between two entities. The new system includes these features:

  • The primaryjoin argument is no longer needed when constructing a relationship() against a class that has multiple foreign key paths to the target. Only the foreign_keys argument is needed to specify those columns which should be included:

    class Parent(Base):
        __tablename__ = 'parent'
        id = Column(Integer, primary_key=True)
        child_id_one = Column(Integer, ForeignKey('child.id'))
        child_id_two = Column(Integer, ForeignKey('child.id'))

        child_one = relationship("Child", foreign_keys=child_id_one)
        child_two = relationship("Child", foreign_keys=child_id_two)

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
  • Relationships against self-referential, composite foreign keys where a column points to itself are now supported. The canonical case is as follows:

    class Folder(Base):
        __tablename__ = 'folder'
        __table_args__ = (
            ForeignKeyConstraint(
                ['account_id', 'parent_id'],
                ['folder.account_id', 'folder.folder_id']),
        )

        account_id = Column(Integer, primary_key=True)
        folder_id = Column(Integer, primary_key=True)
        parent_id = Column(Integer)
        name = Column(String)

        parent_folder = relationship("Folder",
                            backref="child_folders",
                            remote_side=[account_id, folder_id]
                        )

Above, the Folder refers to its parent Folder joining from account_id to itself, and parent_id to folder_id. When SQLAlchemy constructs an auto-join, no longer can it assume all columns on the “remote” side are aliased, and all columns on the “local” side are not - the account_id column is on both sides. So the internal relationship mechanics were totally rewritten to support an entirely different system whereby two copies of account_id are generated, each containing different _annotations to determine their role within the statement. Note the join condition within a basic eager load:

    SELECT
        folder.account_id AS folder_account_id,
        folder.folder_id AS folder_folder_id,
        folder.parent_id AS folder_parent_id,
        folder.name AS folder_name,
        folder_1.account_id AS folder_1_account_id,
        folder_1.folder_id AS folder_1_folder_id,
        folder_1.parent_id AS folder_1_parent_id,
        folder_1.name AS folder_1_name
    FROM folder
        LEFT OUTER JOIN folder AS folder_1
        ON
            folder_1.account_id = folder.account_id
            AND folder.folder_id = folder_1.parent_id
    WHERE folder.folder_id = ? AND folder.account_id = ?
  • Previously difficult custom join conditions, like those involving functions and/or CASTing of types, will now function as expected in most cases:

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign_keys, remote_side
        parent_host = relationship("HostEntry",
                            primaryjoin=ip_address == cast(content, INET),
                            foreign_keys=content,
                            remote_side=ip_address
                        )

The new relationship() mechanics make use of a SQLAlchemy concept known as annotations. These annotations are also available to application code explicitly via the foreign() and remote() functions, either as a means to improve readability for advanced configurations or to directly inject an exact configuration, bypassing the usual join-inspection heuristics:

    from sqlalchemy.orm import foreign, remote

    class HostEntry(Base):
        __tablename__ = 'host_entry'

        id = Column(Integer, primary_key=True)
        ip_address = Column(INET)
        content = Column(String(50))

        # relationship() using explicit foreign() and remote() annotations
        # in lieu of separate arguments
        parent_host = relationship("HostEntry",
                            primaryjoin=remote(ip_address) == \
                                    cast(foreign(content), INET),
                        )

See also

Configuring how Relationship Joins - a newly revised section on relationship() detailing the latest techniques for customizing related attributes and collection access.

#1401 #610

New Class/Object Inspection System

Lots of SQLAlchemy users are writing systems that require the ability to inspect the attributes of a mapped class, including being able to get at the primary key columns, object relationships, plain attributes, and so forth, typically for the purpose of building data-marshalling systems, like JSON/XML conversion schemes and of course form libraries galore.

The Table and Column model were the original inspection points, and they have a well-documented system. While SQLAlchemy ORM models are also fully introspectable, this has never been a fully stable and supported feature, and users tended to not have a clear idea how to get at this information.

0.8 now provides a consistent, stable and fully documented API for this purpose, including an inspection system which works on mapped classes, instances, attributes, and other Core and ORM constructs. The entry point to this system is the core-level inspect() function. In most cases, the object being inspected is one already part of SQLAlchemy’s system, such as Mapper, InstanceState, Inspector. In some cases, new objects have been added with the job of providing the inspection API in certain contexts, such as AliasedInsp and AttributeState.

A walkthrough of some key capabilities follows:

    >>> class User(Base):
    ...     __tablename__ = 'user'
    ...     id = Column(Integer, primary_key=True)
    ...     name = Column(String)
    ...     name_syn = synonym("name")
    ...     addresses = relationship("Address")
    ...

    >>> # universal entry point is inspect()
    >>> b = inspect(User)

    >>> # b in this case is the Mapper
    >>> b
    <Mapper at 0x101521950; User>

    >>> # Column namespace
    >>> b.columns.id
    Column('id', Integer(), table=<user>, primary_key=True, nullable=False)

    >>> # mapper's perspective of the primary key
    >>> b.primary_key
    (Column('id', Integer(), table=<user>, primary_key=True, nullable=False),)

    >>> # MapperProperties available from .attrs
    >>> b.attrs.keys()
    ['name_syn', 'addresses', 'id', 'name']

    >>> # .column_attrs, .relationships, etc. filter this collection
    >>> b.column_attrs.keys()
    ['id', 'name']

    >>> list(b.relationships)
    [<sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>]

    >>> # they are also namespaces
    >>> b.column_attrs.id
    <sqlalchemy.orm.properties.ColumnProperty object at 0x101525090>

    >>> b.relationships.addresses
    <sqlalchemy.orm.properties.RelationshipProperty object at 0x1015212d0>

    >>> # point inspect() at a mapped, class level attribute,
    >>> # returns the attribute itself
    >>> b = inspect(User.addresses)
    >>> b
    <sqlalchemy.orm.attributes.InstrumentedAttribute object at 0x101521fd0>

    >>> # From here we can get the mapper:
    >>> b.mapper
    <Mapper at 0x101525810; Address>

    >>> # the parent inspector, in this case a mapper
    >>> b.parent
    <Mapper at 0x101521950; User>

    >>> # an expression
    >>> print(b.expression)
    "user".id = address.user_id

    >>> # inspect works on instances
    >>> u1 = User(id=3, name='x')
    >>> b = inspect(u1)

    >>> # it returns the InstanceState
    >>> b
    <sqlalchemy.orm.state.InstanceState object at 0x10152bed0>

    >>> # a similar attrs accessor refers to per-attribute state
    >>> b.attrs.keys()
    ['id', 'name_syn', 'addresses', 'name']

    >>> # attribute interface - from attrs, you get a state object
    >>> b.attrs.id
    <sqlalchemy.orm.state.AttributeState object at 0x10152bf90>

    >>> # this object can give you, current value...
    >>> b.attrs.id.value
    3

    >>> # ... current history
    >>> b.attrs.id.history
    History(added=[3], unchanged=(), deleted=())

    >>> # InstanceState can also provide session state information
    >>> # let's assume the object is persistent
    >>> s = Session()
    >>> s.add(u1)
    >>> s.commit()

    >>> # now we can get primary key identity, always
    >>> # works in query.get()
    >>> b.identity
    (3,)

    >>> # the mapper level key
    >>> b.identity_key
    (<class '__main__.User'>, (3,))

    >>> # state within the session
    >>> b.persistent, b.transient, b.deleted, b.detached
    (True, False, False, False)

    >>> # owning session
    >>> b.session
    <sqlalchemy.orm.session.Session object at 0x101701150>

See also

Runtime Inspection API

#2208

New with_polymorphic() feature, can be used anywhere

The Query.with_polymorphic() method allows the user to specify which tables should be present when querying against a joined-table entity. Unfortunately the method is awkward, only applies to the first entity in the list, and otherwise behaves poorly both in usage and within the internals. A new enhancement to the aliased() construct has been added called with_polymorphic() which allows any entity to be “aliased” into a “polymorphic” version of itself, freely usable anywhere:

    from sqlalchemy.orm import with_polymorphic
    palias = with_polymorphic(Person, [Engineer, Manager])
    session.query(Company).\
        join(palias, Company.employees).\
        filter(or_(Engineer.language == 'java', Manager.hair == 'pointy'))

See also

Using with_polymorphic - newly updated documentation for polymorphic loading control.

#2333

of_type() works with alias(), with_polymorphic(), any(), has(), joinedload(), subqueryload(), contains_eager()

The PropComparator.of_type() method is used to specify a specific subtype to use when constructing SQL expressions along a relationship() that has a polymorphic mapping as its target. This method can now be used to target any number of target subtypes, by combining it with the new with_polymorphic() function:

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
        join(DataContainer.jobs.of_type(Job_P)).\
        options(contains_eager(DataContainer.jobs.of_type(Job_P)))

The method now works equally well in most places a regular relationship attribute is accepted, including with loader functions like joinedload(), subqueryload(), contains_eager(), and comparison methods like PropComparator.any() and PropComparator.has():

    # use eager loading in conjunction with with_polymorphic targets
    Job_P = with_polymorphic(Job, [SubJob, ExtraJob], aliased=True)
    q = s.query(DataContainer).\
        join(DataContainer.jobs.of_type(Job_P)).\
        options(contains_eager(DataContainer.jobs.of_type(Job_P)))

    # pass subclasses to eager loads (implicitly applies with_polymorphic)
    q = s.query(ParentThing).\
        options(
            joinedload_all(
                ParentThing.container,
                DataContainer.jobs.of_type(SubJob)
            ))

    # control self-referential aliasing with any()/has()
    Job_A = aliased(Job)
    q = s.query(Job).join(DataContainer.jobs).\
        filter(
            DataContainer.jobs.of_type(Job_A).
                any(and_(Job_A.id < Job.id, Job_A.type == 'fred'))
        )

See also

of_type

#2438 #1106

Events Can Be Applied to Unmapped Superclasses

Mapper and instance events can now be associated with an unmapped superclass, where those events will be propagated to subclasses as those subclasses are mapped. The propagate=True flag should be used. This feature allows events to be associated with a declarative base class:

    from sqlalchemy import event
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    @event.listens_for(Base, "load", propagate=True)
    def on_load(target, context):
        print("New instance loaded:", target)

    # on_load() will be applied to SomeClass
    class SomeClass(Base):
        __tablename__ = 'sometable'

        # ...

#2585

Declarative Distinguishes Between Modules/Packages

A key feature of Declarative is the ability to refer to other mapped classes using their string name. The registry of class names is now sensitive to the owning module and package of a given class. The classes can be referred to via dotted name in expressions:

    class Snack(Base):
        # ...

        peanuts = relationship("nuts.Peanut",
                primaryjoin="nuts.Peanut.snack_id == Snack.id")

The resolution allows that any full or partial disambiguating package name can be used. If the path to a particular class is still ambiguous, an error is raised.
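
For example, given a hypothetical layout where both myapp.snacks.nuts and myapp.garden.nuts define a Peanut class, a partial package path is enough to disambiguate:

    # "snacks.nuts.Peanut" resolves unambiguously to myapp.snacks.nuts.Peanut
    peanuts = relationship("snacks.nuts.Peanut",
            primaryjoin="snacks.nuts.Peanut.snack_id == Snack.id")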

#2338

New DeferredReflection Feature in Declarative

The “deferred reflection” example has been moved to a supported feature within Declarative. This feature allows the construction of declarative mapped classes with only placeholder Table metadata, until a prepare() step is called, given an Engine with which to reflect fully all tables and establish actual mappings. The system supports overriding of columns, single and joined inheritance, as well as distinct bases-per-engine. A full declarative configuration can now be created against an existing table that is assembled upon engine creation time in one step:

    class ReflectedOne(DeferredReflection, Base):
        __abstract__ = True

    class ReflectedTwo(DeferredReflection, Base):
        __abstract__ = True

    class MyClass(ReflectedOne):
        __tablename__ = 'mytable'

    class MyOtherClass(ReflectedOne):
        __tablename__ = 'myothertable'

    class YetAnotherClass(ReflectedTwo):
        __tablename__ = 'yetanothertable'

    ReflectedOne.prepare(engine_one)
    ReflectedTwo.prepare(engine_two)

See also

DeferredReflection

#2485

ORM Classes Now Accepted by Core Constructs

While the SQL expressions used with Query.filter(), such as User.id == 5, have always been compatible for use with core constructs such as select(), the mapped class itself would not be recognized when passed to select(), Select.select_from(), or Select.correlate(). A new SQL registration system allows a mapped class to be accepted as a FROM clause within the core:

    from sqlalchemy import select

    stmt = select([User]).where(User.id == 5)

Above, the mapped User class will expand into the Table to which User is mapped.
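
The same registration applies to Select.select_from(); a brief sketch reusing the User mapping above:

    from sqlalchemy import select, func

    # User expands to the "user" table in the FROM clause
    stmt = select([func.count(User.id)]).select_from(User)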

#2245

Query.update() supports UPDATE..FROM

The new UPDATE..FROM mechanics work in query.update(). Below, we emit an UPDATE against SomeEntity, adding a FROM clause (or equivalent, depending on backend) against SomeOtherEntity:

    query(SomeEntity).\
        filter(SomeEntity.id == SomeOtherEntity.id).\
        filter(SomeOtherEntity.foo == 'bar').\
        update({"data": "x"})

In particular, updates to joined-inheritance entities are supported, provided the target of the UPDATE is local to the table being filtered on, or if the parent and child tables are mixed, they are joined explicitly in the query. Below, given Engineer as a joined subclass of Person:

    query(Engineer).\
        filter(Person.id == Engineer.id).\
        filter(Person.name == 'dilbert').\
        update({"engineer_data": "java"})

would produce:

    UPDATE engineer SET engineer_data='java' FROM person
    WHERE person.id=engineer.id AND person.name='dilbert'

#2365

rollback() will only roll back “dirty” objects from a begin_nested()

A behavioral change that should improve efficiency for those users using SAVEPOINT via Session.begin_nested() - upon rollback(), only those objects that were made dirty since the last flush will be expired; the rest of the Session remains intact. This is because a ROLLBACK to a SAVEPOINT does not terminate the containing transaction’s isolation, so no expiry is needed except for those changes that were not flushed in the current transaction.
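
A rough sketch of the new behavior, assuming a mapped SomeClass with a data attribute (names here are illustrative only):

    s = Session()
    o1, o2 = SomeClass(), SomeClass()
    s.add_all([o1, o2])
    s.commit()             # o1, o2 are now persistent

    s.begin_nested()       # establishes a SAVEPOINT
    o1.data = 'changed'    # o1 becomes dirty; o2 is untouched
    s.rollback()           # rolls back to the SAVEPOINT; only o1 is
                           # expired, o2 keeps its loaded state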

#2452

Caching Example now uses dogpile.cache

The caching example now uses dogpile.cache. Dogpile.cache is a rewrite of the caching portion of Beaker, featuring vastly simpler and faster operation, as well as support for distributed locking.

Note that the SQLAlchemy APIs used by the Dogpile example as well as the previous Beaker example have changed slightly; in particular, this change is needed as illustrated in the Beaker example:

    --- examples/beaker_caching/caching_query.py
    +++ examples/beaker_caching/caching_query.py
    @@ -222,7 +222,8 @@

             """
             if query._current_path:
    -            mapper, key = query._current_path[-2:]
    +            mapper, prop = query._current_path[-2:]
    +            key = prop.key

                 for cls in mapper.class_.__mro__:
                     if (cls, key) in self._relationship_options:

See also

dogpile_caching

#2589

New Core Features

Fully extensible, type-level operator support in Core

The Core has to date never had any system of adding support for new SQL operators to Column and other expression constructs, other than the ColumnOperators.op() method which is “just enough” to make things work. There has also never been any system in place for Core which allows the behavior of existing operators to be overridden. Up until now, the only way operators could be flexibly redefined was in the ORM layer, using column_property() given a comparator_factory argument. Third party libraries like GeoAlchemy therefore were forced to be ORM-centric and rely upon an array of hacks to apply new operations as well as to get them to propagate correctly.

The new operator system in Core adds the one hook that’s been missing all along, which is to associate new and overridden operators with types. Since after all, it’s not really a column, CAST operator, or SQL function that really drives what kinds of operations are present, it’s the type of the expression. The implementation details are minimal - only a few extra methods are added to the core ColumnElement type so that it consults its TypeEngine object for an optional set of operators. New or revised operations can be associated with any type, either via subclassing of an existing type, by using TypeDecorator, or “globally across-the-board” by attaching a new TypeEngine.Comparator object to an existing type class.

For example, to add logarithm support to Numeric types:

    from sqlalchemy.types import Numeric
    from sqlalchemy.sql import func

    class CustomNumeric(Numeric):
        class comparator_factory(Numeric.Comparator):
            def log(self, other):
                return func.log(self.expr, other)

The new type is usable like any other type:

    data = Table('data', metadata,
        Column('id', Integer, primary_key=True),
        Column('x', CustomNumeric(10, 5)),
        Column('y', CustomNumeric(10, 5))
    )

    stmt = select([data.c.x.log(data.c.y)]).where(data.c.x.log(2) < value)
    print(conn.execute(stmt).fetchall())

New features which have come from this immediately include support for PostgreSQL’s HSTORE type, as well as new operations associated with PostgreSQL’s ARRAY type. It also paves the way for existing types to acquire lots more operators that are specific to those types, such as more string, integer and date operators.

See also

Redefining and Creating New Operators

HSTORE

#2547

Multiple-VALUES support for Insert

The Insert.values() method now supports a list of dictionaries, which will render a multi-VALUES statement such as VALUES (<row1>), (<row2>), …. This is only relevant to backends which support this syntax, including PostgreSQL, SQLite, and MySQL. It is not the same thing as the usual executemany() style of INSERT, which remains unchanged:

    users.insert().values([
        {"name": "some name"},
        {"name": "some other name"},
        {"name": "yet another name"},
    ])

See also

Insert.values()

#2623

Type Expressions

SQL expressions can now be associated with types. Historically, TypeEngine has always allowed Python-side functions which receive both bound parameters as well as result row values, passing them through a Python side conversion function on the way to/back from the database. The new feature allows similar functionality, except on the database side:

    from sqlalchemy.types import String
    from sqlalchemy import func, Table, Column, MetaData

    class LowerString(String):
        def bind_expression(self, bindvalue):
            return func.lower(bindvalue)

        def column_expression(self, col):
            return func.lower(col)

    metadata = MetaData()
    test_table = Table(
        'test_table',
        metadata,
        Column('data', LowerString)
    )

Above, the LowerString type defines a SQL expression that will be emitted whenever the test_table.c.data column is rendered in the columns clause of a SELECT statement:

    >>> print(select([test_table]).where(test_table.c.data == 'HI'))
    SELECT lower(test_table.data) AS data
    FROM test_table
    WHERE test_table.data = lower(:data_1)

This feature is also used heavily by the new release of GeoAlchemy, to embed PostGIS expressions inline in SQL based on type rules.

See also

Applying SQL-level Bind/Result Processing

#1534

Core Inspection System

The inspect() function introduced in New Class/Object Inspection System also applies to the core. Applied to an Engine it produces an Inspector object:

    from sqlalchemy import inspect
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    insp = inspect(engine)
    print(insp.get_table_names())

It can also be applied to any ClauseElement, which returns the ClauseElement itself, such as Table, Column, Select, etc. This allows it to work fluently between Core and ORM constructs.
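
A quick sketch of the pass-through behavior:

    from sqlalchemy import inspect, MetaData, Table, Column, Integer

    t = Table('t', MetaData(), Column('x', Integer))
    assert inspect(t) is t   # ClauseElements are returned as-is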

New Method Select.correlate_except()

select() now has a method Select.correlate_except() which specifies “correlate on all FROM clauses except those specified”. It can be used for mapping scenarios where a related subquery should correlate normally, except against a particular target selectable:

    class SnortEvent(Base):
        __tablename__ = "event"

        id = Column(Integer, primary_key=True)
        signature = Column(Integer, ForeignKey("signature.id"))

        signatures = relationship("Signature", lazy=False)

    class Signature(Base):
        __tablename__ = "signature"

        id = Column(Integer, primary_key=True)

        sig_count = column_property(
            select([func.count('*')]).
                where(SnortEvent.signature == id).
                correlate_except(SnortEvent)
        )

See also

Select.correlate_except()

PostgreSQL HSTORE type

Support for PostgreSQL’s HSTORE type is now available as postgresql.HSTORE. This type makes great use of the new operator system to provide a full range of operators for HSTORE types, including index access, concatenation, and containment methods such as has_key(), has_any(), and matrix():

    from sqlalchemy.dialects.postgresql import HSTORE

    data = Table('data_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('hstore_data', HSTORE)
    )

    engine.execute(
        select([data.c.hstore_data['some_key']])
    ).scalar()

    engine.execute(
        select([data.c.hstore_data.matrix()])
    ).scalar()
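
The containment methods mentioned above work the same way; a brief sketch against the same table:

    engine.execute(
        select([data.c.id]).where(data.c.hstore_data.has_key('some_key'))
    ).fetchall()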

See also

postgresql.HSTORE

postgresql.hstore

#2606

Enhanced PostgreSQL ARRAY type

The postgresql.ARRAY type will accept an optional “dimension” argument, pinning it to a fixed number of dimensions and greatly improving efficiency when retrieving results:

    # old way, still works since PG supports N-dimensions per row:
    Column("my_array", postgresql.ARRAY(Integer))

    # new way, will render ARRAY with correct number of [] in DDL,
    # will process binds and results more efficiently as we don't need
    # to guess how many levels deep to go
    Column("my_array", postgresql.ARRAY(Integer, dimensions=2))

The type also introduces new operators, using the new type-specific operator framework. New operations include indexed access:

    result = conn.execute(
        select([mytable.c.arraycol[2]])
    )

slice access in SELECT:

    result = conn.execute(
        select([mytable.c.arraycol[2:4]])
    )

slice updates in UPDATE:

    conn.execute(
        mytable.update().values({mytable.c.arraycol[2:3]: [7, 8]})
    )

freestanding array literals:

    >>> from sqlalchemy.dialects import postgresql
    >>> conn.scalar(
    ...     select([
    ...         postgresql.array([1, 2]) + postgresql.array([3, 4, 5])
    ...     ])
    ... )
    [1, 2, 3, 4, 5]

array concatenation, where below, the right side [4, 5, 6] is coerced into an array literal:

    select([mytable.c.arraycol + [4, 5, 6]])

See also

postgresql.ARRAY

postgresql.array

#2441

New, configurable DATE, TIME types for SQLite

SQLite has no built-in DATE, TIME, or DATETIME types, and instead provides some support for storage of date and time values either as strings or integers. The date and time types for SQLite are enhanced in 0.8 to be much more configurable as to the specific format, including that the “microseconds” portion is optional, as well as pretty much everything else.

    Column('sometimestamp', sqlite.DATETIME(truncate_microseconds=True))

    Column('sometimestamp', sqlite.DATETIME(
                storage_format=(
                    "%(year)04d%(month)02d%(day)02d"
                    "%(hour)02d%(minute)02d%(second)02d%(microsecond)06d"
                ),
                regexp=r"(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2})(\d{6})"
            )
        )

    Column('somedate', sqlite.DATE(
                storage_format="%(month)02d/%(day)02d/%(year)04d",
                regexp=r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)",
            )
        )

Huge thanks to Nate Dub for the sprinting on this at Pycon 2012.

See also

sqlite.DATETIME

sqlite.DATE

sqlite.TIME

#2363

“COLLATE” supported across all dialects; in particular MySQL, PostgreSQL, SQLite

The “collate” keyword, long accepted by the MySQL dialect, is now established on all String types and will render on any backend, including when features such as MetaData.create_all() and cast() are used:

    >>> stmt = select([cast(sometable.c.somechar, String(20, collation='utf8'))])
    >>> print(stmt)
    SELECT CAST(sometable.somechar AS VARCHAR(20) COLLATE "utf8") AS anon_1
    FROM sometable

See also

String

#2276

“Prefixes” now supported for update(), delete()

Geared towards MySQL, a “prefix” can be rendered within any of these constructs. E.g.:

    stmt = table.delete().prefix_with("LOW_PRIORITY", dialect="mysql")

    stmt = table.update().prefix_with("LOW_PRIORITY", dialect="mysql")

The method is new in addition to those which already existed on insert(), select() and Query.

See also

Update.prefix_with()

Delete.prefix_with()

Insert.prefix_with()

Select.prefix_with()

Query.prefix_with()

#2431

Behavioral Changes

The consideration of a “pending” object as an “orphan” has been made more aggressive

This is a late add to the 0.8 series, however it is hoped that the new behavior is generally more consistent and intuitive in a wider variety of situations. The ORM has since at least version 0.4 included behavior such that an object that’s “pending”, meaning that it’s associated with a Session but hasn’t been inserted into the database yet, is automatically expunged from the Session when it becomes an “orphan”, which means it has been de-associated with a parent object that refers to it with delete-orphan cascade on the configured relationship(). This behavior is intended to approximately mirror the behavior of a persistent (that is, already inserted) object, where the ORM will emit a DELETE for such objects that become orphans based on the interception of detachment events.

The behavioral change comes into play for objects that are referred to by multiple kinds of parents that each specify delete-orphan; the typical example is an association object that bridges two other kinds of objects in a many-to-many pattern. Previously, the behavior was such that the pending object would be expunged only when de-associated with all of its parents. With the behavioral change, the pending object is expunged as soon as it is de-associated from any of the parents that it was previously associated with. This behavior is intended to more closely match that of persistent objects, which are deleted as soon as they are de-associated from any parent.

The rationale for the older behavior dates back at least to version 0.4, and was basically a defensive decision to try to alleviate confusion when an object was still being constructed for INSERT. But the reality is that the object is re-associated with the Session as soon as it is attached to any new parent in any case.

It’s still possible to flush an object that is not associated with all of its required parents, if the object was either not associated with those parents in the first place, or if it was expunged, but then re-associated with a Session via a subsequent attachment event but still not fully associated. In this situation, it is expected that the database would emit an integrity error, as there are likely NOT NULL foreign key columns that are unpopulated. The ORM makes the decision to let these INSERT attempts occur, based on the judgment that an object that is only partially associated with its required parents but has been actively associated with some of them, is more often than not a user error, rather than an intentional omission which should be silently skipped - silently skipping the INSERT here would make user errors of this nature very hard to debug.

The old behavior, for applications that might have been relying upon it, can be re-enabled for any Mapper by specifying the flag legacy_is_orphan as a mapper option.

The new behavior allows the following test case to work:

    from sqlalchemy import Column, Integer, String, ForeignKey
    from sqlalchemy.orm import relationship, backref
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class User(Base):
        __tablename__ = 'user'
        id = Column(Integer, primary_key=True)
        name = Column(String(64))

    class UserKeyword(Base):
        __tablename__ = 'user_keyword'
        user_id = Column(Integer, ForeignKey('user.id'), primary_key=True)
        keyword_id = Column(Integer, ForeignKey('keyword.id'), primary_key=True)

        user = relationship(User,
                    backref=backref("user_keywords",
                                cascade="all, delete-orphan")
                )

        keyword = relationship("Keyword",
                    backref=backref("user_keywords",
                                cascade="all, delete-orphan")
                )

        # uncomment this to enable the old behavior
        # __mapper_args__ = {"legacy_is_orphan": True}

    class Keyword(Base):
        __tablename__ = 'keyword'
        id = Column(Integer, primary_key=True)
        keyword = Column('keyword', String(64))

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session

    # note we're using PostgreSQL to ensure that referential integrity
    # is enforced, for demonstration purposes.
    e = create_engine("postgresql://scott:tiger@localhost/test", echo=True)

    Base.metadata.drop_all(e)
    Base.metadata.create_all(e)

    session = Session(e)

    u1 = User(name="u1")
    k1 = Keyword(keyword="k1")

    session.add_all([u1, k1])

    uk1 = UserKeyword(keyword=k1, user=u1)

    # previously, if session.flush() were called here,
    # this operation would succeed, but if session.flush()
    # were not called here, the operation fails with an
    # integrity error.
    # session.flush()
    del u1.user_keywords[0]

    session.commit()

#2655

The after_attach event fires after the item is associated with the Session instead of before; before_attach added

Event handlers which use after_attach can now assume the given instance is associated with the given session:

    @event.listens_for(Session, "after_attach")
    def after_attach(session, instance):
        assert instance in session

Some use cases require that it work this way. However, other use cases require that the item is not yet part of the session, such as when a query, intended to load some state required for an instance, emits autoflush first and would otherwise prematurely flush the target object. Those use cases should use the new “before_attach” event:

    @event.listens_for(Session, "before_attach")
    def before_attach(session, instance):
        # filter_by() takes keyword arguments; "name" is an illustrative
        # attribute of Widget here
        instance.some_necessary_attribute = session.query(Widget).\
            filter_by(name=instance.widget_name).\
            first()

#2464

Query now auto-correlates like a select() does

Previously it was necessary to call Query.correlate() in order to have a column- or WHERE-subquery correlate to the parent:

    subq = session.query(Entity.value).\
        filter(Entity.id == Parent.entity_id).\
        correlate(Parent).\
        as_scalar()
    session.query(Parent).filter(subq == "some value")

This was the opposite behavior of a plain select() construct, which would assume auto-correlation by default. The above statement in 0.8 will correlate automatically:

    subq = session.query(Entity.value).\
        filter(Entity.id == Parent.entity_id).\
        as_scalar()
    session.query(Parent).filter(subq == "some value")

Like in select(), correlation can be disabled by calling query.correlate(None) or manually set by passing an entity, query.correlate(someentity).
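
For example, to keep the subquery un-correlated under the new default (a sketch based on the query above):

    subq = session.query(Entity.value).\
        filter(Entity.id == Parent.entity_id).\
        correlate(None).\
        as_scalar()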

#2179

Correlation is now always context-specific

To allow a wider variety of correlation scenarios, the behavior of Select.correlate() and Query.correlate() has changed slightly such that the SELECT statement will omit the “correlated” target from the FROM clause only if the statement is actually used in that context. Additionally, it’s no longer possible for a SELECT statement that’s placed as a FROM in an enclosing SELECT statement to “correlate” (i.e. omit) a FROM clause.

This change only makes things better as far as rendering SQL, in that it’s no longer possible to render illegal SQL where there are insufficient FROM objects relative to what’s being selected:

    from sqlalchemy.sql import table, column, select

    t1 = table('t1', column('x'))
    t2 = table('t2', column('y'))
    s = select([t1, t2]).correlate(t1)

    print(s)

Prior to this change, the above would return:

    SELECT t1.x, t2.y FROM t2

which is invalid SQL as “t1” is not referred to in any FROM clause.

Now, in the absence of an enclosing SELECT, it returns:

    SELECT t1.x, t2.y FROM t1, t2

Within a SELECT, the correlation takes effect as expected:

    s2 = select([t1, t2]).where(t1.c.x == t2.c.y).where(t1.c.x == s)

    print(s2)

    SELECT t1.x, t2.y FROM t1, t2
    WHERE t1.x = t2.y AND t1.x =
        (SELECT t1.x, t2.y FROM t2)

This change is not expected to impact any existing applications, as the correlation behavior remains identical for properly constructed expressions. Only an application that relies, most likely within a testing scenario, on the invalid string output of a correlated SELECT used in a non-correlating context would see any change.

#2668

create_all() and drop_all() will now honor an empty list as such

The methods MetaData.create_all() and MetaData.drop_all() will now accept a list of Table objects that is empty, and will not emit any CREATE or DROP statements. Previously, an empty list was interpreted the same as passing None for a collection, and CREATE/DROP would be emitted for all items unconditionally.
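
A short sketch of the new behavior:

    # emits no CREATE statements at all in 0.8; previously this
    # created every table in the MetaData
    metadata.create_all(engine, tables=[])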

This is a bug fix, but some applications may have been relying upon the previous behavior.

#2664

Repaired the Event Targeting of InstrumentationEvents

The InstrumentationEvents series of event targets have documented that the events will only be fired off according to the actual class passed as a target. Through 0.7, this wasn’t the case, and any event listener applied to InstrumentationEvents would be invoked for all classes mapped. In 0.8, additional logic has been added so that the events will only invoke for those classes sent in. The propagate flag here is set to True by default, as class instrumentation events are typically used to intercept classes that aren’t yet created.

#2590

No more magic coercion of “=” to IN when comparing to subquery in MS-SQL

We found a very old behavior in the MSSQL dialect which would attempt to rescue users from themselves when doing something like this:

    scalar_subq = select([someothertable.c.id]).where(someothertable.c.data == 'foo')
    select([sometable]).where(sometable.c.id == scalar_subq)

SQL Server doesn’t allow an equality comparison to a scalar SELECT, that is, “x = (SELECT something)”. The MSSQL dialect would convert this to an IN. The same thing would happen however upon a comparison like “(SELECT something) = x”, and overall this level of guessing is outside of SQLAlchemy’s usual scope, so the behavior is removed.
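
Code that relied upon the coercion should state the intent explicitly; a sketch using IN:

    select([sometable]).where(
        sometable.c.id.in_(
            select([someothertable.c.id]).
                where(someothertable.c.data == 'foo')
        )
    )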

#2277

Fixed the behavior of Session.is_modified()

The Session.is_modified() method accepts an argument passive which basically should not be necessary; the argument in all cases should be the value True - when left at its default of False it would have the effect of hitting the database, and often triggering autoflush which would itself change the results. In 0.8 the passive argument will have no effect, and unloaded attributes will never be checked for history, since by definition there can be no pending state change on an unloaded attribute.
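
A brief sketch, assuming a mapped SomeClass; no passive flag is needed in 0.8:

    obj = session.query(SomeClass).first()
    obj.data = 'new value'
    assert session.is_modified(obj)   # checks loaded attributes only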

See also

Session.is_modified()

#2320

Column.key is honored in the Select.c attribute of select() with Select.apply_labels()

Users of the expression system know that Select.apply_labels() prepends the table name to each column name, affecting the names that are available from Select.c:

    s = select([table1]).apply_labels()
    s.c.table1_col1
    s.c.table1_col2

Before 0.8, if the Column had a different Column.key, this key would be ignored, inconsistently versus when Select.apply_labels() were not used:

    # before 0.8
    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one  # would be accessible like this
    s.c.col1  # would raise AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one  # would raise AttributeError
    s.c.table1_col1  # would be accessible like this

In 0.8, Column.key is honored in both cases:

    # with 0.8
    table1 = Table('t1', metadata,
        Column('col1', Integer, key='column_one')
    )
    s = select([table1])
    s.c.column_one  # works
    s.c.col1  # AttributeError

    s = select([table1]).apply_labels()
    s.c.table1_column_one  # works
    s.c.table1_col1  # AttributeError

All other behavior regarding “name” and “key” is the same, including that the rendered SQL will still use the form <tablename>_<colname> - the emphasis here was on preventing the Column.key contents from being rendered into the SELECT statement so that there are no issues with special/non-ascii characters used in the Column.key.

#2397

single_parent warning is now an error

A relationship() that is many-to-one or many-to-many and specifies “cascade='all, delete-orphan'”, which is an awkward but nonetheless supported use case (with restrictions), will now raise an error if the relationship does not specify the single_parent=True option. Previously it would only emit a warning, but a failure would follow almost immediately within the attribute system in any case.
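
A sketch of the kind of mapping that now requires the flag (hypothetical classes):

    class Child(Base):
        __tablename__ = 'child'
        id = Column(Integer, primary_key=True)
        parent_id = Column(Integer, ForeignKey('parent.id'))

        # many-to-one with delete-orphan: single_parent=True is now
        # required; omitting it raises an error at mapper configuration
        parent = relationship("Parent",
                    cascade="all, delete-orphan",
                    single_parent=True)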

#2405

Adding the inspector argument to the column_reflect event

0.7 added a new event called column_reflect, provided so that the reflection of columns could be augmented as each one were reflected. We got this event slightly wrong in that the event gave no way to get at the current Inspector and Connection being used for the reflection, in the case that additional information from the database is needed. As this is a new event not widely used yet, we’ll be adding the inspector argument into it directly:

    @event.listens_for(Table, "column_reflect")
    def listen_for_col(inspector, table, column_info):
        # ...

#2418

Disabling auto-detect of collations, casing for MySQL

The MySQL dialect does two calls, one very expensive, to load all possible collations from the database as well as information on casing, the first time an Engine connects. Neither of these collections are used for any SQLAlchemy functions, so these calls will be changed to no longer be emitted automatically. Applications that might have relied on these collections being present on engine.dialect will need to call upon _detect_collations() and _detect_casing() directly.

#2404

Unconsumed column names warning becomes an exception

Referring to a non-existent column in an insert() or update() construct will raise an error instead of a warning:

    t1 = table('t1', column('x'))
    t1.insert().values(x=5, z=5)  # raises "Unconsumed column names: z"

#2415

Inspector.get_primary_keys() is deprecated, use Inspector.get_pk_constraint()

These two methods on Inspector were redundant, where get_primary_keys() would return the same information as get_pk_constraint() minus the name of the constraint:

    >>> insp.get_primary_keys()
    ["a", "b"]
    >>> insp.get_pk_constraint()
    {"name": "pk_constraint", "constrained_columns": ["a", "b"]}

#2422

Case-insensitive result row names will be disabled in most cases

A very old behavior, the column names in RowProxy were always compared case-insensitively:

    >>> row = result.fetchone()
    >>> row['foo'] == row['FOO'] == row['Foo']
    True

This was for the benefit of a few dialects which in the early days needed this, like Oracle and Firebird, but in modern usage we have more accurate ways of dealing with the case-insensitive behavior of these two platforms.

Going forward, this behavior will be available only optionally, by passing the flag case_sensitive=False to create_engine(), but otherwise column names requested from the row must match as far as casing.

#2423

InstrumentationManager and alternate class instrumentation is now an extension

The sqlalchemy.orm.interfaces.InstrumentationManager class is moved to sqlalchemy.ext.instrumentation.InstrumentationManager. The alternate instrumentation system was built for the benefit of a very small number of installations that needed to work with existing or unusual class instrumentation systems, and generally is very seldom used. The complexity of this system has been exported to an ext. module. It remains unused until once imported, typically when a third party library imports InstrumentationManager, at which point it is injected back into sqlalchemy.orm by replacing the default InstrumentationFactory with ExtendedInstrumentationRegistry.
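
Code using the class needs only the updated import:

    # previously: from sqlalchemy.orm.interfaces import InstrumentationManager
    from sqlalchemy.ext.instrumentation import InstrumentationManager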

Removed

SQLSoup

SQLSoup is a handy package that presents an alternative interface on top of the SQLAlchemy ORM. SQLSoup is now moved into its own project and documented/released separately; see https://bitbucket.org/zzzeek/sqlsoup.

SQLSoup is a very simple tool that could also benefit from contributors who are interested in its style of usage.

#2262

MutableType

The older mutable system within the SQLAlchemy ORM has been removed. This refers to the MutableType interface which was applied to types such as PickleType and conditionally to TypeDecorator, and since very early SQLAlchemy versions has provided a way for the ORM to detect changes in so-called mutable data structures such as JSON structures and pickled objects. However, the implementation was never reasonable and forced a very inefficient mode of usage on the unit-of-work which caused an expensive scan of all objects to take place during flush. In 0.7, the sqlalchemy.ext.mutable extension was introduced so that user-defined datatypes can appropriately send events to the unit of work as changes occur.

Today, usage of MutableType is expected to be low, as warnings have been in place for some years now regarding its inefficiency.

#2442

sqlalchemy.exceptions (has been sqlalchemy.exc for years)

We had left in an alias, sqlalchemy.exceptions, to attempt to make it slightly easier for some very old libraries that hadn't yet been upgraded to use sqlalchemy.exc. Some users are still being confused by it, however, so in 0.8 we're taking it out entirely to eliminate any of that confusion.

#2433