Postgresql Extensions

The postgresql extensions module provides a number of “postgres-only” features, currently:

  • JSON support, including the binary jsonb type (JSONField and BinaryJSONField)
  • hstore support (HStoreField)
  • Interval support (IntervalField)
  • Server-side cursors
  • Full-text search (Match() and TSVectorField)
  • Arrays (ArrayField)
  • Timezone-aware timestamps (DateTimeTZField)

In the future I would like to add support for more of PostgreSQL's features. If there is a particular feature you would like to see added, please open a GitHub issue.

Warning

In order to start using the features described below, you will need to use the extension PostgresqlExtDatabase class instead of PostgresqlDatabase.

The code below will assume you are using the following database and base model:

    from playhouse.postgres_ext import *

    ext_db = PostgresqlExtDatabase('peewee_test', user='postgres')

    class BaseExtModel(Model):
        class Meta:
            database = ext_db

JSON Support

peewee has basic support for Postgres’ native JSON data type, in the form of JSONField. As of version 2.4.7, peewee also supports the Postgres 9.4 binary JSON type, jsonb, via BinaryJSONField.

Warning

Postgres supports a JSON data type natively as of 9.2 (full support in 9.3). In order to use this functionality you must be using the correct version of Postgres with psycopg2 version 2.5 or greater.

To use BinaryJSONField, which has many performance and querying advantages, you must have Postgres 9.4 or later.

Note

You must be sure your database is an instance of PostgresqlExtDatabase in order to use the JSONField.

Here is an example of how you might declare a model with a JSON field:

    import json
    import urllib2

    from playhouse.postgres_ext import *

    db = PostgresqlExtDatabase('my_database')

    class APIResponse(Model):
        url = CharField()
        response = JSONField()

        class Meta:
            database = db

        @classmethod
        def request(cls, url):
            fh = urllib2.urlopen(url)
            return cls.create(url=url, response=json.loads(fh.read()))

    APIResponse.create_table()

    # Store a JSON response.
    offense = APIResponse.request('http://crime-api.com/api/offense/')
    booking = APIResponse.request('http://crime-api.com/api/booking/')

    # Query a JSON data structure using a nested key lookup:
    offense_responses = APIResponse.select().where(
        APIResponse.response['meta']['model'] == 'offense')

    # Retrieve a sub-key for each APIResponse. By calling .as_json(), the
    # data at the sub-key will be returned as Python objects (dicts, lists,
    # etc) instead of serialized JSON.
    q = (APIResponse
         .select(
             APIResponse.response['booking']['person'].as_json().alias('person'))
         .where(APIResponse.response['meta']['model'] == 'booking'))

    for result in q:
        print(result.person['name'], result.person['dob'])

The BinaryJSONField works the same and supports the same operations as the regular JSONField, but provides several additional operations for testing containment. Using the binary json field, you can test whether your JSON data contains other partial JSON structures (contains(), contains_any(), contains_all()), or whether it is a subset of a larger JSON document (contained_by()).
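For example, assuming the response column above were declared as a BinaryJSONField, a containment query might look like the following sketch:

    # Find responses whose JSON document contains this partial structure.
    fragment = {'meta': {'model': 'offense'}}
    query = APIResponse.select().where(APIResponse.response.contains(fragment))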

For more examples, see the JSONField and BinaryJSONField API documents below.

hstore support

Postgresql hstore is an embedded key/value store. With hstore, you can store arbitrary key/value pairs in your database alongside structured relational data.

To use hstore, you need to specify an additional parameter when instantiating your PostgresqlExtDatabase:

    # Specify "register_hstore=True":
    db = PostgresqlExtDatabase('my_db', register_hstore=True)

Currently the postgres_ext module supports the following operations:

  • Store and retrieve arbitrary dictionaries
  • Filter by key(s) or partial dictionary
  • Update/add one or more keys to an existing dictionary
  • Delete one or more keys from an existing dictionary
  • Select keys, values, or zip keys and values
  • Retrieve a slice of keys/values
  • Test for the existence of a key
  • Test that a key has a non-NULL value

Using hstore

To start with, you will need to import the custom database class and the hstore functions from playhouse.postgres_ext (see above code snippet). Then, it is as simple as adding a HStoreField to your model:

    class House(BaseExtModel):
        address = CharField()
        features = HStoreField()

You can now store arbitrary key/value pairs on House instances:

    >>> h = House.create(
    ...     address='123 Main St',
    ...     features={'garage': '2 cars', 'bath': '2 bath'})
    >>> h_from_db = House.get(House.id == h.id)
    >>> h_from_db.features
    {'bath': '2 bath', 'garage': '2 cars'}

You can filter by individual key, multiple keys or partial dictionary:

    >>> query = House.select()
    >>> garage = query.where(House.features.contains('garage'))
    >>> garage_and_bath = query.where(House.features.contains(['garage', 'bath']))
    >>> twocar = query.where(House.features.contains({'garage': '2 cars'}))

Suppose you want to do an atomic update to the house:

    >>> new_features = House.features.update({'bath': '2.5 bath', 'sqft': '1100'})
    >>> query = House.update(features=new_features)
    >>> query.where(House.id == h.id).execute()
    1
    >>> h = House.get(House.id == h.id)
    >>> h.features
    {'bath': '2.5 bath', 'garage': '2 cars', 'sqft': '1100'}

Or, alternatively an atomic delete:

    >>> query = House.update(features=House.features.delete('bath'))
    >>> query.where(House.id == h.id).execute()
    1
    >>> h = House.get(House.id == h.id)
    >>> h.features
    {'garage': '2 cars', 'sqft': '1100'}

Multiple keys can be deleted at the same time:

    >>> query = House.update(features=House.features.delete('garage', 'sqft'))

You can select just keys, just values, or zip the two:

    >>> for h in House.select(House.address, House.features.keys().alias('keys')):
    ...     print(h.address, h.keys)
    123 Main St [u'bath', u'garage']

    >>> for h in House.select(House.address, House.features.values().alias('vals')):
    ...     print(h.address, h.vals)
    123 Main St [u'2 bath', u'2 cars']

    >>> for h in House.select(House.address, House.features.items().alias('mtx')):
    ...     print(h.address, h.mtx)
    123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]

You can retrieve a slice of data, for example, all the garage data:

    >>> query = House.select(House.address, House.features.slice('garage').alias('garage_data'))
    >>> for house in query:
    ...     print(house.address, house.garage_data)
    123 Main St {'garage': '2 cars'}

You can check for the existence of a key and filter rows accordingly:

    >>> has_garage = House.features.exists('garage')
    >>> for house in House.select(House.address, has_garage.alias('has_garage')):
    ...     print(house.address, house.has_garage)
    123 Main St True
    >>> for house in House.select().where(House.features.exists('garage')):
    ...     print(house.address, house.features['garage'])  # <-- just houses w/garage data
    123 Main St 2 cars

Interval support

Postgres supports durations through the INTERVAL data-type (docs).

class IntervalField([null=False[, ...]])

Field class capable of storing Python datetime.timedelta instances.

Example:

    from datetime import timedelta

    from playhouse.postgres_ext import *

    db = PostgresqlExtDatabase('my_db')

    class Event(Model):
        location = CharField()
        duration = IntervalField()
        start_time = DateTimeField()

        class Meta:
            database = db

        @classmethod
        def get_long_meetings(cls):
            return cls.select().where(cls.duration > timedelta(hours=1))
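Storing a duration is just a matter of assigning a timedelta; a minimal sketch (the location and times are made up for illustration):

    import datetime

    Event.create(
        location='Conference Room A',
        start_time=datetime.datetime(2015, 6, 1, 9, 0),
        duration=datetime.timedelta(minutes=90))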

Server-side cursors

When psycopg2 executes a query, normally all results are fetched and returned to the client by the backend. This can cause your application to use a lot of memory when making large queries. Using server-side cursors, results are returned a little at a time (by default 2000 records). For the definitive reference, please see the psycopg2 documentation.

Note

To use server-side (or named) cursors, you must be using PostgresqlExtDatabase.

To execute a query using a server-side cursor, simply wrap your select query using the ServerSide() helper:

    large_query = PageView.select()  # Build query normally.

    # Iterate over large query inside a transaction.
    for page_view in ServerSide(large_query):
        # do some interesting analysis here.
        pass

    # Server-side resources are released.

If you would like all SELECT queries to automatically use a server-side cursor, you can specify this when creating your PostgresqlExtDatabase:

    from playhouse.postgres_ext import PostgresqlExtDatabase

    ss_db = PostgresqlExtDatabase('my_db', server_side_cursors=True)

Note

Server-side cursors live only as long as the transaction, so for this reason peewee will not automatically call commit() after executing a SELECT query. If you do not commit after you are done iterating, you will not release the server-side resources until the connection is closed (or the transaction is committed later). Furthermore, since peewee will by default cache rows returned by the cursor, you should always call .iterator() when iterating over a large query.

If you are using the ServerSide() helper, the transaction and call to iterator() will be handled transparently.
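If you would rather manage this yourself, the pattern that ServerSide() automates looks roughly like the following sketch (assuming ss_db is a PostgresqlExtDatabase configured with server_side_cursors=True):

    # Wrap iteration in a transaction and use .iterator() so rows are not cached.
    with ss_db.transaction():
        for page_view in PageView.select().iterator():
            pass  # Process each row here.

    # Committing the transaction releases the server-side cursor.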

Full-text search

Postgresql provides sophisticated full-text search using special data-types (tsvector and tsquery). Documents should be stored or converted to the tsvector type, and search queries should be converted to tsquery.

For simple cases, you can simply use the Match() function, which will automatically perform the appropriate conversions, and requires no schema changes:

    def blog_search(query):
        return Blog.select().where(
            (Blog.status == Blog.STATUS_PUBLISHED) &
            Match(Blog.content, query))

The Match() function will automatically convert the left-hand operand to a tsvector, and the right-hand operand to a tsquery. For better performance, it is recommended you create a GIN index on the column you plan to search:

    CREATE INDEX blog_full_text_search ON blog USING gin(to_tsvector(content));

Alternatively, you can use the TSVectorField to maintain a dedicated column for storing tsvector data:

    class Blog(Model):
        content = TextField()
        search_content = TSVectorField()

You will need to explicitly convert the incoming text data to tsvector when inserting or updating the search_content field:

    content = 'Excellent blog post about peewee ORM.'
    blog_entry = Blog.create(
        content=content,
        search_content=fn.to_tsvector(content))
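The same conversion applies when updating an existing row; a sketch of keeping search_content in sync when a post is edited (the revised text is just an example):

    new_content = 'Revised blog post about the peewee ORM.'
    query = (Blog
             .update(content=new_content,
                     search_content=fn.to_tsvector(new_content))
             .where(Blog.id == blog_entry.id))
    query.execute()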

Note

If you are using the TSVectorField, it will automatically be created with a GIN index.

postgres_ext API notes

class PostgresqlExtDatabase(database[, server_side_cursors=False[, register_hstore=False[, ...]]])

Identical to PostgresqlDatabase, but required in order to support the extensions described in this document, such as server-side cursors, hstore, and the JSON field types.

Parameters:
  • database (str) – Name of database to connect to.
  • server_side_cursors (bool) – Whether SELECT queries should utilize server-side cursors.
  • register_hstore (bool) – Register the HStore extension with the connection.

If you wish to use the HStore extension, you must specify register_hstore=True.

If using server_side_cursors, also be sure to wrap your queries with ServerSide().

ServerSide(select_query)

Parameters: select_query – a SelectQuery instance.
Return type: generator

Wrap the given select query in a transaction, and call its iterator() method to avoid caching row instances. In order for the server-side resources to be released, be sure to exhaust the generator (iterate over all the rows).

Usage:

    large_query = PageView.select()
    for page_view in ServerSide(large_query):
        # Do something interesting.
        pass

    # At this point server side resources are released.

class ArrayField([field_class=IntegerField[, field_kwargs=None[, dimensions=1[, convert_values=False]]]])

Parameters:
  • field_class – a subclass of Field, e.g. IntegerField.
  • field_kwargs (dict) – arguments to initialize field_class.
  • dimensions (int) – dimensions of array.
  • convert_values (bool) – apply field_class value conversion to array data.

Field capable of storing arrays of the provided field_class.

Note

By default ArrayField will use a GIN index. To disable this, initialize the field with index=False.

You can store and retrieve lists (or lists-of-lists):

    class BlogPost(BaseModel):
        content = TextField()
        tags = ArrayField(CharField)

    post = BlogPost(content='awesome', tags=['foo', 'bar', 'baz'])
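The dimensions parameter controls the nesting depth; a hypothetical sketch of a two-dimensional integer array (the Matrix model is made up for illustration):

    class Matrix(BaseModel):
        values = ArrayField(IntegerField, dimensions=2)

    # Lists-of-lists map onto the two-dimensional array.
    Matrix.create(values=[[1, 2], [3, 4]])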

Additionally, you can use the __getitem__ API to query values or slices in the database:

    # Get the first tag on a given blog post.
    first_tag = (BlogPost
                 .select(BlogPost.tags[0].alias('first_tag'))
                 .where(BlogPost.id == 1)
                 .dicts()
                 .get())

    # first_tag = {'first_tag': 'foo'}

Get a slice of values:

    # Get the first two tags.
    two_tags = (BlogPost
                .select(BlogPost.tags[:2].alias('two'))
                .dicts()
                .get())

    # two_tags = {'two': ['foo', 'bar']}
  • contains(*items)

    Parameters: items – One or more items that must be in the given array field.

        # Get all blog posts that are tagged with both "python" and "django".
        Blog.select().where(Blog.tags.contains('python', 'django'))
  • contains_any(*items)

    Parameters: items – One or more items to search for in the given array field.

    Like contains(), except will match rows where the array contains any of the given items.

        # Get all blog posts that are tagged with "flask" and/or "django".
        Blog.select().where(Blog.tags.contains_any('flask', 'django'))

class DateTimeTZField(*args, **kwargs)

A timezone-aware subclass of DateTimeField.
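A minimal sketch of declaring one (the Appointment model below is hypothetical):

    class Appointment(BaseExtModel):
        subject = CharField()
        # Stored as a timezone-aware timestamp ("timestamp with time zone").
        scheduled_for = DateTimeTZField()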

class HStoreField(*args, **kwargs)

A field for storing and retrieving arbitrary key/value pairs. For details on usage, see hstore support.

Attention

To use the HStoreField you will need to be sure the hstore extension is registered with the connection. To accomplish this, instantiate the PostgresqlExtDatabase with register_hstore=True.

Note

By default HStoreField will use a GiST index. To disable this, initialize the field with index=False.

  • keys()

    Returns the keys for a given row.

        >>> for h in House.select(House.address, House.features.keys().alias('keys')):
        ...     print(h.address, h.keys)
        123 Main St [u'bath', u'garage']
  • values()

    Return the values for a given row.

        >>> for h in House.select(House.address, House.features.values().alias('vals')):
        ...     print(h.address, h.vals)
        123 Main St [u'2 bath', u'2 cars']
  • items()

    Like python’s dict, return the keys and values in a list-of-lists:

        >>> for h in House.select(House.address, House.features.items().alias('mtx')):
        ...     print(h.address, h.mtx)
        123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]
  • slice(*args)

    Return a slice of data given a list of keys.

        >>> for h in House.select(House.address, House.features.slice('garage').alias('garage_data')):
        ...     print(h.address, h.garage_data)
        123 Main St {'garage': '2 cars'}
  • exists(key)

    Query for whether the given key exists.

        >>> for h in House.select(House.address, House.features.exists('garage').alias('has_garage')):
        ...     print(h.address, h.has_garage)
        123 Main St True
        >>> for h in House.select().where(House.features.exists('garage')):
        ...     print(h.address, h.features['garage'])  # <-- just houses w/garage data
        123 Main St 2 cars
  • defined(key)

    Query for whether the given key has a value associated with it.
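    A sketch of filtering on this, mirroring the exists() example above:

        >>> # Only houses where the "garage" key has a non-NULL value.
        >>> query = House.select().where(House.features.defined('garage'))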

  • update(**data)

    Perform an atomic update to the keys/values for a given row or rows.

        >>> query = House.update(features=House.features.update(
        ...     sqft=2000,
        ...     year_built=2012))
        >>> query.where(House.id == 1).execute()
  • delete(*keys)

    Delete the provided keys for a given row or rows.

    Note

    We will use an UPDATE query.

        >>> query = House.update(features=House.features.delete(
        ...     'sqft', 'year_built'))
        >>> query.where(House.id == 1).execute()
  • contains(value)

    Parameters: value – Either a dict, a list of keys, or a single key.

    Query rows for the existence of either:

    • a partial dictionary.
    • a list of keys.
    • a single key.
        >>> query = House.select()
        >>> has_garage = query.where(House.features.contains('garage'))
        >>> garage_bath = query.where(House.features.contains(['garage', 'bath']))
        >>> twocar = query.where(House.features.contains({'garage': '2 cars'}))
  • contains_any(*keys)

    Parameters: keys – One or more keys to search for.

    Query rows for the existence of any key.
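    A sketch, reusing the House model from above (the "basement" key is made up):

        >>> # Houses that have either a "garage" key or a "basement" key.
        >>> query = House.select().where(House.features.contains_any('garage', 'basement'))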

class JSONField(dumps=None, *args, **kwargs)

Parameters: dumps – Function used to serialize data to JSON. Defaults to json.dumps(); override it to customize how values are serialized.

Field class suitable for storing and querying arbitrary JSON. When using this on a model, set the field’s value to a Python object (either a dict or a list). When you retrieve your value from the database it will be returned as a Python data structure.

Note

You must be using Postgres 9.2 / psycopg2 2.5 or greater.

Note

If you are using Postgres 9.4, strongly consider using the BinaryJSONField instead as it offers better performance and more powerful querying options.

Example model declaration:

    db = PostgresqlExtDatabase('my_db')

    class APIResponse(Model):
        url = CharField()
        response = JSONField()

        class Meta:
            database = db
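If you need customized serialization, pass a dumps callable when declaring the field; a sketch reusing the model above (the default=str fallback is just for illustration):

    import functools
    import json

    # Fall back to str() for values json.dumps() cannot handle natively.
    custom_dumps = functools.partial(json.dumps, default=str)

    class APIResponse(Model):
        url = CharField()
        response = JSONField(dumps=custom_dumps)

        class Meta:
            database = db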

Example of storing JSON data:

    url = 'http://foo.com/api/resource/'
    resp = json.loads(urllib2.urlopen(url).read())
    APIResponse.create(url=url, response=resp)

    APIResponse.create(url='http://foo.com/baz/', response={'key': 'value'})

To query, use Python’s [] operators to specify nested key or array lookups:

    APIResponse.select().where(
        APIResponse.response['key1']['nested-key'] == 'some-value')

To illustrate the use of the [] operators, imagine we have the following data stored in an APIResponse:

    {
      "foo": {
        "bar": ["i1", "i2", "i3"],
        "baz": {
          "huey": "mickey",
          "peewee": "nugget"
        }
      }
    }

Here are the results of a few queries:

    def get_data(expression):
        # Helper function to just retrieve the results of a
        # particular expression.
        query = (APIResponse
                 .select(expression.alias('my_data'))
                 .dicts()
                 .get())
        return query['my_data']

    # Accessing the foo -> bar subkey will return a JSON
    # representation of the list.
    get_data(APIResponse.response['foo']['bar'])
    # '["i1", "i2", "i3"]'

    # In order to retrieve this list as a Python list,
    # we will call .as_json() on the expression.
    get_data(APIResponse.response['foo']['bar'].as_json())
    # ['i1', 'i2', 'i3']

    # Similarly, accessing the foo -> baz subkey will
    # return a JSON representation of the dictionary.
    get_data(APIResponse.response['foo']['baz'])
    # '{"huey": "mickey", "peewee": "nugget"}'

    # Again, calling .as_json() will return an actual
    # python dictionary.
    get_data(APIResponse.response['foo']['baz'].as_json())
    # {'huey': 'mickey', 'peewee': 'nugget'}

    # When dealing with simple values, either way works as
    # you expect.
    get_data(APIResponse.response['foo']['bar'][0])
    # 'i1'

    # Calling .as_json() when the result is a simple value
    # will return the same thing as the previous example.
    get_data(APIResponse.response['foo']['bar'][0].as_json())
    # 'i1'

class BinaryJSONField(dumps=None, *args, **kwargs)

Parameters: dumps – Function used to serialize data to JSON. Defaults to json.dumps(); override it to customize how values are serialized.

Store and query arbitrary JSON documents. Data should be stored using normal Python dict and list objects, and when data is returned from the database, it will be returned using dict and list as well.

For examples of basic query operations, see the above code samples for JSONField. The example queries below use the same APIResponse model as above, with the response column declared as a BinaryJSONField.

Note

By default BinaryJSONField will use a GIN index (jsonb columns do not support GiST indexes). To disable this, initialize the field with index=False.

Note

You must be using Postgres 9.4 / psycopg2 2.5 or newer. If you are using Postgres 9.2 or 9.3, you can use the regular JSONField instead.

  • contains(other)

    Test whether the given JSON data contains the given JSON fragment or key.

    Example:

        search_fragment = {
            'foo': {'bar': ['i2']}
        }
        query = (APIResponse
                 .select()
                 .where(APIResponse.response.contains(search_fragment)))

        # If we're searching for a list, the list items do not need to
        # be ordered in a particular way:
        query = (APIResponse
                 .select()
                 .where(APIResponse.response.contains({
                     'foo': {'bar': ['i2', 'i1']}})))

    We can pass in simple keys as well. To find APIResponses that contain the key foo at the top-level:

        APIResponse.select().where(APIResponse.response.contains('foo'))

    We can also search sub-keys using square-brackets:

        APIResponse.select().where(
            APIResponse.response['foo']['bar'].contains(['i2', 'i1']))
  • contains_any(*items)

    Search for the presence of one or more of the given items.

        APIResponse.select().where(
            APIResponse.response.contains_any('foo', 'baz', 'nugget'))

    Like contains(), we can also search sub-keys:

        APIResponse.select().where(
            APIResponse.response['foo']['bar'].contains_any('i2', 'ix'))
  • contains_all(*items)

    Search for the presence of all of the given items.

        APIResponse.select().where(
            APIResponse.response.contains_all('foo'))

    Like contains_any(), we can also search sub-keys:

        APIResponse.select().where(
            APIResponse.response['foo']['bar'].contains_all('i1', 'i2', 'i3'))
  • contained_by(other)

    Test whether the column's JSON document is contained by (is a subset of) the given JSON document. This method is the inverse of contains().

        big_doc = {
            'foo': {
                'bar': ['i1', 'i2', 'i3'],
                'baz': {
                    'huey': 'mickey',
                    'peewee': 'nugget',
                }
            },
            'other_key': ['nugget', 'bear', 'kitten'],
        }

        APIResponse.select().where(
            APIResponse.response.contained_by(big_doc))

Match(field, query)

Generate a full-text search expression, automatically converting the left-hand operand to a tsvector, and the right-hand operand to a tsquery.

Example:

    def blog_search(query):
        return Blog.select().where(
            (Blog.status == Blog.STATUS_PUBLISHED) &
            Match(Blog.content, query))

class TSVectorField

Field type suitable for storing tsvector data. This field will automatically be created with a GIN index for improved search performance.

Note

Data stored in this field will still need to be manually converted to the tsvector type.

Note

By default TSVectorField will use a GIN index. To disable this, initialize the field with index=False.

Example usage:

    class Blog(Model):
        content = TextField()
        search_content = TSVectorField()

    content = 'this is a sample blog entry.'
    blog_entry = Blog.create(
        content=content,
        search_content=fn.to_tsvector(content))  # Note `to_tsvector()`.