APIs

  • class SqliteExtDatabase(database[, pragmas=None[, timeout=5[, c_extensions=None[, rank_functions=True[, hash_functions=False[, regexp_function=False[, bloomfilter=False]]]]]]])

Parameters:

  • pragmas (list) – A list of 2-tuples containing pragma key and value to set every time a connection is opened.
  • timeout – Set the busy-timeout on the SQLite driver (in seconds).
  • c_extensions (bool) – Declare that C extension speedups must/must-not be used. If set to True and the extension module is not available, will raise an ImproperlyConfigured exception.
  • rank_functions (bool) – Make search result ranking functions available.
  • hash_functions (bool) – Make hashing functions available (md5, sha1, etc).
  • regexp_function (bool) – Make the REGEXP function available.
  • bloomfilter (bool) – Make the bloom filter available.

Extends SqliteDatabase and inherits methods for declaring user-defined functions, pragmas, etc.

  • class CSqliteExtDatabase(database[, pragmas=None[, timeout=5[, c_extensions=None[, rank_functions=True[, hash_functions=False[, regexp_function=False[, bloomfilter=False[, replace_busy_handler=False]]]]]]]])

Parameters:

  • pragmas (list) – A list of 2-tuples containing pragma key and value to set every time a connection is opened.
  • timeout – Set the busy-timeout on the SQLite driver (in seconds).
  • c_extensions (bool) – Declare that C extension speedups must/must-not be used. If set to True and the extension module is not available, will raise an ImproperlyConfigured exception.
  • rank_functions (bool) – Make search result ranking functions available.
  • hash_functions (bool) – Make hashing functions available (md5, sha1, etc).
  • regexp_function (bool) – Make the REGEXP function available.
  • bloomfilter (bool) – Make the bloom filter available.
  • replace_busy_handler (bool) – Use a smarter busy-handler implementation.

Extends SqliteExtDatabase and requires that the playhouse._sqlite_ext extension module be available.

  • on_commit(fn)
  • Register a callback to be executed whenever a transaction is committed on the current connection. The callback accepts no parameters and the return value is ignored.

However, if the callback raises a ValueError, the transaction will be aborted and rolled back.

Example:

```python
db = CSqliteExtDatabase(':memory:')

@db.on_commit
def on_commit():
    logger.info('COMMITing changes')
```
  • on_rollback(fn)
  • Register a callback to be executed whenever a transaction is rolled back on the current connection. The callback accepts no parameters and the return value is ignored.

Example:

```python
@db.on_rollback
def on_rollback():
    logger.info('Rolling back changes')
```

  • on_update(fn)
  • Register a callback to be executed whenever the database is written to (via an UPDATE, INSERT or DELETE query). The callback should accept the following parameters:

    • query - the type of query, either INSERT, UPDATE or DELETE.
    • database name - the default database is named main.
    • table name - name of table being modified.
    • rowid - the rowid of the row being modified.

The callback’s return value is ignored.

Example:

```python
db = CSqliteExtDatabase(':memory:')

@db.on_update
def on_update(query_type, db, table, rowid):
    # e.g. INSERT row 3 into table users.
    logger.info('%s row %s into table %s', query_type, rowid, table)
```
  • changes()
  • Return the number of rows modified in the currently-open transaction.

  • autocommit

  • Property which returns a boolean indicating if autocommit is enabled. By default, this value will be True except when inside a transaction (or atomic() block).

Example:

```python
>>> db = CSqliteExtDatabase(':memory:')
>>> db.autocommit
True
>>> with db.atomic():
...     print(db.autocommit)
...
False
>>> db.autocommit
True
```
  • backup(destination[, pages=None, name=None, progress=None])

Parameters:

  • destination (SqliteDatabase) – Database object to serve as destination for the backup.
  • pages (int) – Number of pages per iteration. Default value of -1 indicates all pages should be backed-up in a single step.
  • name (str) – Name of source database (may differ if you used ATTACH DATABASE to load multiple databases). Defaults to “main”.
  • progress – Progress callback, called with three parameters: the number of pages remaining, the total page count, and whether the backup is complete.

Example:

```python
master = CSqliteExtDatabase('master.db')
replica = CSqliteExtDatabase('replica.db')

# Backup the contents of master to replica.
master.backup(replica)
```
  • backup_to_file(filename[, pages, name, progress])

Parameters:

  • filename – Filename to store the database backup.
  • pages (int) – Number of pages per iteration. Default value of -1 indicates all pages should be backed-up in a single step.
  • name (str) – Name of source database (may differ if you used ATTACH DATABASE to load multiple databases). Defaults to “main”.
  • progress – Progress callback, called with three parameters: the number of pages remaining, the total page count, and whether the backup is complete.

Backup the current database to a file. The backed-up data is not a database dump, but an actual SQLite database file.

Example:

```python
db = CSqliteExtDatabase('app.db')

def nightly_backup():
    filename = 'backup-%s.db' % (datetime.date.today())
    db.backup_to_file(filename)
```
  • blob_open(table, column, rowid[, read_only=False])

Parameters:

  • table (str) – Name of table containing data.
  • column (str) – Name of column containing data.
  • rowid (int) – ID of row to retrieve.
  • read_only (bool) – Open the blob for reading only.

Returns: a Blob instance which provides efficient access to the underlying binary data.

Return type: Blob

See Blob and ZeroBlob for more information.

Example:

```python
class Image(Model):
    filename = TextField()
    data = BlobField()

buf_size = 1024 * 1024 * 8  # Allocate 8MB for storing file.
rowid = Image.insert({Image.filename: 'thefile.jpg',
                      Image.data: ZeroBlob(buf_size)}).execute()

# Open the blob, returning a file-like object.
blob = db.blob_open('image', 'data', rowid)

# Write some data to the blob.
blob.write(image_data)
img_size = blob.tell()

# Read the data back out of the blob.
blob.seek(0)
image_data = blob.read(img_size)
```
  • class RowIDField
  • Primary-key field that corresponds to the SQLite rowid field. For more information, see the SQLite documentation on rowid tables.

Example:

```python
class Note(Model):
    rowid = RowIDField()  # Will be primary key.
    content = TextField()
    timestamp = TimestampField()
```
  • class DocIDField
  • Subclass of RowIDField for use on virtual tables that specifically use the convention of docid for the primary key. As far as I know this only pertains to tables using the FTS3 and FTS4 full-text search extensions.

Attention

In FTS3 and FTS4, “docid” is simply an alias for “rowid”. To reduce confusion, it’s probably best to just always use RowIDField and never use DocIDField.

```python
class NoteIndex(FTSModel):
    docid = DocIDField()  # "docid" is used as an alias for "rowid".
    content = SearchField()

    class Meta:
        database = db
```
  • class AutoIncrementField
  • SQLite, by default, may reuse primary key values after rows are deleted. To ensure that the primary key is always monotonically increasing, regardless of deletions, you should use AutoIncrementField. There is a small performance cost for this feature. For more information, see the SQLite docs on autoincrement.
  • class JSONField(json_dumps=None, json_loads=None)
  • Field class suitable for storing JSON data, with special methods designed to work with the json1 extension.

SQLite 3.9.0 added JSON support in the form of an extension library. The SQLite json1 extension provides a number of helper functions for working with JSON data. These APIs are exposed as methods of a special field-type, JSONField.

To access or modify specific object keys or array indexes in a JSON structure, you can treat the JSONField as if it were a dictionary/list.

Parameters:

  • json_dumps – (optional) function for serializing data to JSON strings. If not provided, will use the stdlib json.dumps.
  • json_loads – (optional) function for de-serializing JSON to Python objects. If not provided, will use the stdlib json.loads.

Note

To customize the JSON serialization or de-serialization, you can specify custom json_dumps and json_loads callables. These functions should accept a single parameter: the object to serialize, and the JSON string, respectively. To modify the parameters of the stdlib JSON functions, you can use functools.partial:

```python
# Do not escape unicode code-points.
my_json_dumps = functools.partial(json.dumps, ensure_ascii=False)

class SomeModel(Model):
    # Specify our custom serialization function.
    json_data = JSONField(json_dumps=my_json_dumps)
```

Let’s look at some examples of using the SQLite json1 extension with Peewee. Here we’ll prepare a database and a simple model for testing the json1 extension:

```python
>>> from playhouse.sqlite_ext import *
>>> db = SqliteExtDatabase(':memory:')
>>> class KV(Model):
...     key = TextField()
...     value = JSONField()
...     class Meta:
...         database = db
...

>>> KV.create_table()
```

Storing data works as you might expect. There’s no need to serialize dictionaries or lists as JSON, as this is done automatically by Peewee:

```python
>>> KV.create(key='a', value={'k1': 'v1'})
<KV: 1>
>>> KV.get(KV.key == 'a').value
{'k1': 'v1'}
```

We can access specific parts of the JSON data using dictionary lookups:

```python
>>> KV.get(KV.value['k1'] == 'v1').key
'a'
```

It’s possible to update a JSON value in-place using the update() method. Note that “k1=v1” is preserved:

```python
>>> KV.update(value=KV.value.update({'k2': 'v2', 'k3': 'v3'})).execute()
1
>>> KV.get(KV.key == 'a').value
{'k1': 'v1', 'k2': 'v2', 'k3': 'v3'}
```

We can also update existing data atomically, or remove keys by setting their value to None. In the following example, we’ll update the value of “k1” and remove “k3” (“k2” will not be modified):

```python
>>> KV.update(value=KV.value.update({'k1': 'v1-x', 'k3': None})).execute()
1
>>> KV.get(KV.key == 'a').value
{'k1': 'v1-x', 'k2': 'v2'}
```

We can also set individual parts of the JSON data using the set() method:

```python
>>> KV.update(value=KV.value['k1'].set('v1')).execute()
1
>>> KV.get(KV.key == 'a').value
{'k1': 'v1', 'k2': 'v2'}
```

The set() method can also be used with objects, in addition to scalar values:

```python
>>> KV.update(value=KV.value['k2'].set({'x2': 'y2'})).execute()
1
>>> KV.get(KV.key == 'a').value
{'k1': 'v1', 'k2': {'x2': 'y2'}}
```

Individual parts of the JSON data can be removed atomically as well, using remove():

```python
>>> KV.update(value=KV.value['k2'].remove()).execute()
1
>>> KV.get(KV.key == 'a').value
{'k1': 'v1'}
```

We can also get the type of value stored at a specific location in the JSON data using the json_type() method:

```python
>>> KV.select(KV.value.json_type(), KV.value['k1'].json_type()).tuples()[:]
[('object', 'text')]
```

Let’s add a nested value and then see how to iterate through its contents recursively using the tree() method:

```python
>>> KV.create(key='b', value={'x1': {'y1': 'z1', 'y2': 'z2'}, 'x2': [1, 2]})
<KV: 2>
>>> tree = KV.value.tree().alias('tree')
>>> query = KV.select(KV.key, tree.c.fullkey, tree.c.value).from_(KV, tree)
>>> query.tuples()[:]
[('a', '$', {'k1': 'v1'}),
 ('a', '$.k1', 'v1'),
 ('b', '$', {'x1': {'y1': 'z1', 'y2': 'z2'}, 'x2': [1, 2]}),
 ('b', '$.x2', [1, 2]),
 ('b', '$.x2[0]', 1),
 ('b', '$.x2[1]', 2),
 ('b', '$.x1', {'y1': 'z1', 'y2': 'z2'}),
 ('b', '$.x1.y1', 'z1'),
 ('b', '$.x1.y2', 'z2')]
```

The tree() and children() methods are powerful. For more information on how to utilize them, see the json1 extension documentation.

Also note that JSONField lookups can be chained:

```python
>>> query = KV.select().where(KV.value['x1']['y1'] == 'z1')
>>> for obj in query:
...     print(obj.key, obj.value)
...
'b', {'x1': {'y1': 'z1', 'y2': 'z2'}, 'x2': [1, 2]}
```

For more information, refer to the sqlite json1 documentation.

  • __getitem__(item)

Parameters: item – Access a specific key or array index in the JSON data.

Returns: a special object exposing access to the JSON data.

Return type: JSONPath

Access a specific key or array index in the JSON data. Returns a JSONPath object, which exposes convenient methods for reading or modifying a particular part of a JSON object.

Example:

```python
# If metadata contains {"tags": ["list", "of", "tags"]}, we can
# extract the first tag in this way:
Post.select(Post, Post.metadata['tags'][0].alias('first_tag'))
```

For more examples see the JSONPath API documentation.

  • set(value[, as_json=None])

Parameters:

  • value – a scalar value, list, or dictionary.
  • as_json (bool) – force the value to be treated as JSON, in which case it will be serialized as JSON in Python beforehand. By default, lists and dictionaries are treated as JSON to be serialized, while strings and integers are passed as-is.

Set the value stored in a JSONField.

Uses the json_set() function from the json1 extension.

  • update(data)

Parameters: data – a scalar value, list or dictionary to merge with the data currently stored in a JSONField. To remove a particular key, set that key to None in the updated data.

Merge new data into the JSON value using the RFC-7396 MergePatch algorithm to apply a patch (data parameter) against the column data. MergePatch can add, modify, or delete elements of a JSON object, which means update() is a generalized replacement for both set() and remove(). MergePatch treats JSON array objects as atomic, so update() cannot append to an array, nor modify individual elements of an array.

For more information as well as examples, see the SQLite json_patch() function documentation.

  • remove()
  • Remove the data stored in the JSONField.

Uses the json_remove function from the json1 extension.

  • json_type()
  • Return a string identifying the type of value stored in the column.

The type returned will be one of:

  • object
  • array
  • integer
  • real
  • true
  • false
  • text
  • null ← the string “null” means an actual NULL value
  • NULL ← an actual NULL value means the path was not found

Uses the json_type function from the json1 extension.

  • length()
  • Return the length of the array stored in the column.

Uses the json_array_length function from the json1 extension.

  • children()
  • The children function corresponds to json_each, a table-valued function that walks the JSON value provided and returns the immediate children of the top-level array or object. If a path is specified, then that path is treated as the top-most element.

The rows returned by calls to children() have the following attributes:

  • key: the key of the current element relative to its parent.
  • value: the value of the current element.
  • type: one of the data-types (see json_type()).
  • atom: the scalar value for primitive types, NULL for arrays and objects.
  • id: a unique ID referencing the current node in the tree.
  • parent: the ID of the containing node.
  • fullkey: the full path describing the current element.
  • path: the path to the container of the current row.

Internally this method uses the json_each function from the json1 extension.

Example usage (compare to tree() method):

```python
class KeyData(Model):
    key = TextField()
    data = JSONField()

KeyData.create(key='a', data={'k1': 'v1', 'x1': {'y1': 'z1'}})
KeyData.create(key='b', data={'x1': {'y1': 'z1', 'y2': 'z2'}})

# We will query the KeyData model for the key and all the
# top-level keys and values in its data field.
kd = KeyData.data.children().alias('children')
query = (KeyData
         .select(kd.c.key, kd.c.value, kd.c.fullkey)
         .from_(KeyData, kd)
         .order_by(kd.c.key)
         .tuples())
print(query[:])

# PRINTS:
# [('a', 'k1', 'v1', '$.k1'),
#  ('a', 'x1', '{"y1":"z1"}', '$.x1'),
#  ('b', 'x1', '{"y1":"z1","y2":"z2"}', '$.x1')]
```
  • tree()
  • The tree function corresponds to json_tree, a table-valued function that recursively walks the JSON value provided and returns information about the keys at each level. If a path is specified, then that path is treated as the top-most element.

The rows returned by calls to tree() have the same attributes as rows returned by calls to children():

  • key: the key of the current element relative to its parent.
  • value: the value of the current element.
  • type: one of the data-types (see json_type()).
  • atom: the scalar value for primitive types, NULL for arrays and objects.
  • id: a unique ID referencing the current node in the tree.
  • parent: the ID of the containing node.
  • fullkey: the full path describing the current element.
  • path: the path to the container of the current row.

Internally this method uses the json_tree function from the json1 extension.

Example usage:

```python
class KeyData(Model):
    key = TextField()
    data = JSONField()

KeyData.create(key='a', data={'k1': 'v1', 'x1': {'y1': 'z1'}})
KeyData.create(key='b', data={'x1': {'y1': 'z1', 'y2': 'z2'}})

# We will query the KeyData model for the key and all the
# keys and values in its data field, recursively.
kd = KeyData.data.tree().alias('tree')
query = (KeyData
         .select(kd.c.key, kd.c.value, kd.c.fullkey)
         .from_(KeyData, kd)
         .order_by(kd.c.key)
         .tuples())
print(query[:])

# PRINTS:
# [('a', None, '{"k1":"v1","x1":{"y1":"z1"}}', '$'),
#  ('b', None, '{"x1":{"y1":"z1","y2":"z2"}}', '$'),
#  ('a', 'k1', 'v1', '$.k1'),
#  ('a', 'x1', '{"y1":"z1"}', '$.x1'),
#  ('b', 'x1', '{"y1":"z1","y2":"z2"}', '$.x1'),
#  ('a', 'y1', 'z1', '$.x1.y1'),
#  ('b', 'y1', 'z1', '$.x1.y1'),
#  ('b', 'y2', 'z2', '$.x1.y2')]
```
  • class JSONPath(field[, path=None])

Parameters:

  • field (JSONField) – the field object we intend to access.
  • path (tuple) – Components comprising the JSON path.

A convenient, Pythonic way of representing JSON paths for use with JSONField.

The JSONPath object implements __getitem__, accumulating path components, which it can turn into the corresponding json-path expression.

  • __getitem__(item)

Parameters: item – Access a sub-key or array index.

Returns: a JSONPath representing the new path.

Access a sub-key or array index in the JSON data. Returns a JSONPath object, which exposes convenient methods for reading or modifying a particular part of a JSON object.

Example:

```python
# If metadata contains {"tags": ["list", "of", "tags"]}, we can
# extract the first tag in this way:
first_tag = Post.metadata['tags'][0]
query = (Post
         .select(Post, first_tag.alias('first_tag'))
         .order_by(first_tag))
```
  • set(value[, as_json=None])

Parameters:

  • value – a scalar value, list, or dictionary.
  • as_json (bool) – force the value to be treated as JSON, in which case it will be serialized as JSON in Python beforehand. By default, lists and dictionaries are treated as JSON to be serialized, while strings and integers are passed as-is.

Set the value at the given location in the JSON data.

Uses the json_set() function from the json1 extension.

  • update(data)

Parameters: data – a scalar value, list or dictionary to merge with the data at the given location in the JSON data. To remove a particular key, set that key to None in the updated data.

Merge new data into the JSON value using the RFC-7396 MergePatch algorithm to apply a patch (data parameter) against the column data. MergePatch can add, modify, or delete elements of a JSON object, which means update() is a generalized replacement for both set() and remove(). MergePatch treats JSON array objects as atomic, so update() cannot append to an array, nor modify individual elements of an array.

For more information as well as examples, see the SQLite json_patch() function documentation.

  • remove()
  • Remove the data stored at the given location in the JSON data.

Uses the json_remove function from the json1 extension.

  • json_type()
  • Return a string identifying the type of value stored at the given location in the JSON data.

The type returned will be one of:

  • object
  • array
  • integer
  • real
  • true
  • false
  • text
  • null ← the string “null” means an actual NULL value
  • NULL ← an actual NULL value means the path was not found

Uses the json_type function from the json1 extension.

  • length()
  • Return the length of the array stored at the given location in the JSON data.

Uses the json_array_length function from the json1 extension.

  • children()
  • Table-valued function that exposes the direct descendants of a JSON object at the given location. See also JSONField.children().

  • tree()

  • Table-valued function that exposes all descendants, recursively, of a JSON object at the given location. See also JSONField.tree().
  • class SearchField([unindexed=False[, column_name=None]])
  • Field-class to be used for columns on models representing full-text search virtual tables. The full-text search extensions prohibit the specification of any typing or constraints on columns. This behavior is enforced by the SearchField, which raises an exception if any configuration is attempted that would be incompatible with the full-text search extensions.

Example model for document search index (timestamp is stored in the table but its data is not searchable):

```python
class DocumentIndex(FTSModel):
    title = SearchField()
    content = SearchField()
    tags = SearchField()
    timestamp = SearchField(unindexed=True)
```
  • match(term)

Parameters: term (str) – full-text search query/terms

Returns: an Expression corresponding to the MATCH operator.

Sqlite’s full-text search supports searching either the full table, including all indexed columns, or searching individual columns. The match() method can be used to restrict search to a single column:

```python
class SearchIndex(FTSModel):
    title = SearchField()
    body = SearchField()

# Search *only* the title field and return results ordered by
# relevance, using bm25.
query = (SearchIndex
         .select(SearchIndex, SearchIndex.bm25().alias('score'))
         .where(SearchIndex.title.match('python'))
         .order_by(SearchIndex.bm25()))
```

To instead search all indexed columns, use the FTSModel.match() method:

```python
# Searches *both* the title and body and returns results ordered by
# relevance, using bm25.
query = (SearchIndex
         .select(SearchIndex, SearchIndex.bm25().alias('score'))
         .where(SearchIndex.match('python'))
         .order_by(SearchIndex.bm25()))
```
  • class VirtualModel
  • Model class designed to be used to represent virtual tables. The default metadata settings are slightly different, to match those frequently used by virtual tables.

Metadata options:

  • arguments - arguments passed to the virtual table constructor.
  • extension_module - name of extension to use for virtual table.
  • options - a dictionary of settings to apply in the virtual table constructor.
  • primary_key - defaults to False, indicating no primary key.

These are all combined in the following way:
```sql
CREATE VIRTUAL TABLE <table_name>
USING <extension_module>
([prefix_arguments, ...] fields, ... [arguments, ...], [options...])
```

FTSModel subclasses should be defined normally, however there are a couple of caveats:

  • Unique constraints, not null constraints, check constraints and foreign keys are not supported.
  • Indexes on fields and multi-column indexes are ignored completely.
  • Sqlite will treat all column types as TEXT (although you can store other data types, Sqlite will treat them as text).
  • FTS models contain a rowid field which is automatically created and managed by SQLite (unless you choose to explicitly set it during model creation). Lookups on this column are fast and efficient.

Given these constraints, it is strongly recommended that all fields declared on an FTSModel subclass be instances of SearchField (though an exception is made for explicitly declaring a RowIDField). Using SearchField will help prevent you from accidentally creating invalid column constraints. If you wish to store metadata in the index but would not like it to be included in the full-text index, then specify unindexed=True when instantiating the SearchField.

The only exception to the above is for the rowid primary key, which can be declared using RowIDField. Lookups on the rowid are very efficient. If you are using FTS4 you can also use DocIDField, which is an alias for the rowid (though there is no benefit to doing so).

Because of the lack of secondary indexes, it usually makes sense to use the rowid primary key as a pointer to a row in a regular table. For example:

```python
class Document(Model):
    # Canonical source of data, stored in a regular table.
    author = ForeignKeyField(User, backref='documents')
    title = TextField(null=False, unique=True)
    content = TextField(null=False)
    timestamp = DateTimeField()

    class Meta:
        database = db

class DocumentIndex(FTSModel):
    # Full-text search index.
    rowid = RowIDField()
    title = SearchField()
    content = SearchField()

    class Meta:
        database = db
        # Use the porter stemming algorithm to tokenize content.
        options = {'tokenize': 'porter'}
```

To store a document in the document index, we will INSERT a row into the DocumentIndex table, manually setting the rowid so that it matches the primary-key of the corresponding Document:

```python
def store_document(document):
    DocumentIndex.insert({
        DocumentIndex.rowid: document.id,
        DocumentIndex.title: document.title,
        DocumentIndex.content: document.content}).execute()
```

To perform a search and return ranked results, we can query the Document table and join on the DocumentIndex. This join will be efficient because lookups on an FTSModel’s rowid field are fast:

```python
def search(phrase):
    # Query the search index and join the corresponding Document
    # object on each search result.
    return (Document
            .select()
            .join(
                DocumentIndex,
                on=(Document.id == DocumentIndex.rowid))
            .where(DocumentIndex.match(phrase))
            .order_by(DocumentIndex.bm25()))
```

Warning

All SQL queries on FTSModel classes will be full-table scans except full-text searches and rowid lookups.

If the primary source of the content you are indexing exists in a separate table, you can save some disk space by instructing SQLite to not store an additional copy of the search index content. SQLite will still create the metadata and data-structures needed to perform searches on the content, but the content itself will not be stored in the search index.

To accomplish this, you can specify a table or column using the content option. The FTS4 documentation has more information.

Here is a short example illustrating how to implement this with peewee:

```python
class Blog(Model):
    title = TextField()
    pub_date = DateTimeField(default=datetime.datetime.now)
    content = TextField()  # We want to search this.

    class Meta:
        database = db

class BlogIndex(FTSModel):
    content = SearchField()

    class Meta:
        database = db
        options = {'content': Blog.content}  # <-- specify data source.

db.create_tables([Blog, BlogIndex])

# Now, we can manage content in the BlogIndex. To populate the
# search index:
BlogIndex.rebuild()

# Optimize the index.
BlogIndex.optimize()
```

The content option accepts either a single Field or a Model and can reduce the amount of storage used by the database file. However, content will need to be manually moved to/from the associated FTSModel.

  • classmethod match(term)

Parameters: term – Search term or expression.

Generate a SQL expression representing a search for the given term or expression in the table. SQLite uses the MATCH operator to indicate a full-text search.

Example:

```python
# Search index for "search phrase" and return results ranked
# by relevancy using the BM25 algorithm.
query = (DocumentIndex
         .select()
         .where(DocumentIndex.match('search phrase'))
         .order_by(DocumentIndex.bm25()))
for result in query:
    print('Result: %s' % result.title)
```
  • classmethod search(term[, weights=None[, with_score=False[, score_alias='score'[, explicit_ordering=False]]]])

Parameters:

  • term (str) – Search term to use.
  • weights – A list of weights for the columns, ordered with respect to the column’s position in the table. Or, a dictionary keyed by the field or field name and mapped to a value.
  • with_score – Whether the score should be returned as part of the SELECT statement.
  • score_alias (str) – Alias to use for the calculated rank score. This is the attribute you will use to access the score if with_score=True.
  • explicit_ordering (bool) – Order using full SQL function to calculate rank, as opposed to simply referencing the score alias in the ORDER BY clause.

Shorthand way of searching for a term and sorting results by the quality of the match.

Note

This method uses a simplified algorithm for determining the relevance rank of results. For more sophisticated result ranking, use the search_bm25() method.

```python
# Simple search.
docs = DocumentIndex.search('search term')
for result in docs:
    print(result.title)

# More complete example.
docs = DocumentIndex.search(
    'search term',
    weights={'title': 2.0, 'content': 1.0},
    with_score=True,
    score_alias='search_score')
for result in docs:
    print(result.title, result.search_score)
```
  • classmethod search_bm25(term[, weights=None[, with_score=False[, score_alias='score'[, explicit_ordering=False]]]])

Parameters:

  • term (str) – Search term to use.
  • weights – A list of weights for the columns, ordered with respect to the column’s position in the table. Or, a dictionary keyed by the field or field name and mapped to a value.
  • with_score – Whether the score should be returned as part of the SELECT statement.
  • score_alias (str) – Alias to use for the calculated rank score. This is the attribute you will use to access the score if with_score=True.
  • explicit_ordering (bool) – Order using full SQL function to calculate rank, as opposed to simply referencing the score alias in the ORDER BY clause.

Shorthand way of searching for a term and sorting results by the quality of the match using the BM25 algorithm.

Attention

The BM25 ranking algorithm is only available for FTS4. If you are using FTS3, use the search() method instead.

  • classmethod search_bm25f(term[, weights=None[, with_score=False[, score_alias='score'[, explicit_ordering=False]]]])
  • Same as FTSModel.search_bm25(), but using the BM25f variant of the BM25 ranking algorithm.

  • classmethod search_lucene(term[, weights=None[, with_score=False[, score_alias='score'[, explicit_ordering=False]]]])

  • Same as FTSModel.search_bm25(), but using the result ranking algorithm from the Lucene search engine.

  • classmethod rank([col1_weight, col2_weight…coln_weight])

Parameters: col_weight (float) – (Optional) weight to give to the ith column of the model. By default all columns have a weight of 1.0.

Generate an expression that will calculate and return the quality of the search match. This rank can be used to sort the search results. A higher rank score indicates a better match.

The rank function accepts optional parameters that allow you to specify weights for the various columns. If no weights are specified, all columns are considered of equal importance.

Note

The algorithm used by rank() is simple and relatively quick. For more sophisticated result ranking, use:

  1. - [<code>bm25()</code>](#FTSModel.bm25)
  2. - [<code>bm25f()</code>](#FTSModel.bm25f)
  3. - [<code>lucene()</code>](#FTSModel.lucene)
```python
query = (DocumentIndex
         .select(
             DocumentIndex,
             DocumentIndex.rank().alias('score'))
         .where(DocumentIndex.match('search phrase'))
         .order_by(DocumentIndex.rank()))

for search_result in query:
    print(search_result.title, search_result.score)
```
  • classmethod bm25([col1_weight, col2_weight…coln_weight])

Parameters: col_weight (float) – (Optional) weight to give to the ith column of the model. By default all columns have a weight of 1.0.

Generate an expression that will calculate and return the quality of the search match using the BM25 algorithm. This value can be used to sort the search results, with higher scores corresponding to better matches.

Like rank(), the bm25() function accepts optional parameters that allow you to specify weights for the various columns. If no weights are specified, all columns are considered of equal importance.

Attention

The BM25 result ranking algorithm requires FTS4. If you are using FTS3, use rank() instead.

```python
query = (DocumentIndex
         .select(
             DocumentIndex,
             DocumentIndex.bm25().alias('score'))
         .where(DocumentIndex.match('search phrase'))
         .order_by(DocumentIndex.bm25()))

for search_result in query:
    print(search_result.title, search_result.score)
```

Note

The above code example is equivalent to calling the search_bm25() method:

```python
query = DocumentIndex.search_bm25('search phrase', with_score=True)
for search_result in query:
    print(search_result.title, search_result.score)
```
  • classmethod bm25f([col1_weight, col2_weight…coln_weight])
  • Identical to bm25(), except that it uses the BM25f variant of the BM25 ranking algorithm.

  • classmethod lucene([col1_weight, col2_weight…coln_weight])

  • Identical to bm25(), except that it uses the Lucene search result ranking algorithm.

  • classmethod rebuild()

  • Rebuild the search index – this only works when the content option was specified during table creation.

  • classmethod optimize()

  • Optimize the search index.
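Both rebuild() and optimize() map onto SQLite's special FTS "command" inserts, where a keyword is inserted into the table's hidden column of the same name. A stdlib-only sketch of the underlying SQL, assuming a hypothetical FTS5 table named `docs` (the same 'optimize' command also works for FTS3/FTS4 tables):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE VIRTUAL TABLE docs USING fts5(title, content)')
conn.execute("INSERT INTO docs (title, content) VALUES ('a', 'hello world')")

# What optimize() issues under the hood: merge the index's b-trees
# into a single, optimized structure.
conn.execute("INSERT INTO docs(docs) VALUES ('optimize')")
conn.commit()
```

The index remains fully queryable afterwards; 'optimize' only consolidates its internal structure.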
  • class FTS5Model
  • Subclass of VirtualModel to be used with the FTS5 full-text search extension.

FTS5Model subclasses should be defined normally, however there are a couple of caveats:

  • FTS5 explicitly disallows specification of any constraints, data-type, or indexes on columns. For that reason, all columns must be instances of SearchField.
  • FTS5 models contain a rowid field which is automatically created and managed by SQLite (unless you choose to explicitly set it during model creation). Lookups on this column are fast and efficient.
  • Indexes on fields and multi-column indexes are not supported.

The FTS5 extension comes with a built-in implementation of the BM25 ranking function. Therefore, the search() and search_bm25() methods have been overridden to use the built-in ranking functions rather than user-defined functions.
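To make the built-in ranking concrete, here is a stdlib-only sketch of the SQL that FTS5 queries boil down to. Note that SQLite's raw bm25() returns values where more negative means a better match, which is why peewee negates the score so that higher is better (the table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE VIRTUAL TABLE docs USING fts5(title, content)')
conn.executemany('INSERT INTO docs (title, content) VALUES (?, ?)', [
    ('pythons', 'python is a large snake'),
    ('peewee', 'peewee is a python ORM, and python is great'),
    ('sqlite', 'sqlite is an embedded database')])

# Raw bm25() scores: more negative == better match, so ascending
# order returns the best matches first.
rows = conn.execute(
    'SELECT title, bm25(docs) AS score FROM docs '
    'WHERE docs MATCH ? ORDER BY score', ('python',)).fetchall()
for title, score in rows:
    print(title, score)
```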

  • classmethod fts5_installed()

  • Return a boolean indicating whether the FTS5 extension is installed. If it is not installed, an attempt will be made to load the extension.

  • classmethod search(term[, weights=None[, with_score=False[, score_alias='score'[, explicit_ordering=False]]]])

Parameters:

  1. - **term** (_str_) Search term to use.
  2. - **weights** A list of weights for the columns, ordered with respect to the column's position in the table. **Or**, a dictionary keyed by the field or field name and mapped to a value.
  3. - **with_score** Whether the score should be returned as part of the <code>SELECT</code> statement.
  4. - **score_alias** (_str_) Alias to use for the calculated rank score. This is the attribute you will use to access the score if <code>with_score=True</code>.
  5. - **explicit_ordering** (_bool_) Order using the full SQL function to calculate rank, as opposed to simply referencing the score alias in the ORDER BY clause.

Shorthand way of searching for a term and sorting results by the quality of the match. The FTS5 extension provides a built-in implementation of the BM25 algorithm, which is used to rank the results by relevance.

Higher scores correspond to better matches.

```python
# Simple search.
docs = DocumentIndex.search('search term')
for result in docs:
    print(result.title)

# More complete example.
docs = DocumentIndex.search(
    'search term',
    weights={'title': 2.0, 'content': 1.0},
    with_score=True,
    score_alias='search_score')
for result in docs:
    print(result.title, result.search_score)
```
  • classmethod search_bm25(term[, weights=None[, with_score=False[, score_alias='score']]])
  • With FTS5, search_bm25() is identical to the search() method.

  • classmethod rank([col1_weight, col2_weight…coln_weight])

Parameters: col_weight (float) – (Optional) weight to give to the ith column of the model. By default all columns have a weight of 1.0.

Generate an expression that will calculate and return the quality of the search match using the BM25 algorithm. This value can be used to sort the search results, with higher scores corresponding to better matches.

The rank() function accepts optional parameters that allow you to specify weights for the various columns. If no weights are specified, all columns are considered of equal importance.

```python
query = (DocumentIndex
         .select(
             DocumentIndex,
             DocumentIndex.rank().alias('score'))
         .where(DocumentIndex.match('search phrase'))
         .order_by(DocumentIndex.rank()))

for search_result in query:
    print(search_result.title, search_result.score)
```

Note

The above code example is equivalent to calling the search() method:

```python
query = DocumentIndex.search('search phrase', with_score=True)
for search_result in query:
    print(search_result.title, search_result.score)
```
  • classmethod bm25([col1_weight, col2_weight…coln_weight])
  • Because FTS5 provides built-in support for BM25, the bm25() method is identical to the rank() method.

  • classmethod VocabModel([table_type='row'|'col'|'instance'[, table_name=None]])

Parameters:

  1. - **table_type** (_str_) Either 'row', 'col' or 'instance'.
  2. - **table_name** Name for the vocab table. If not specified, will be "fts5tablename_v".

Generate a model class suitable for accessing the vocab table corresponding to the FTS5 search index.
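The generated model wraps an fts5vocab virtual table; the underlying SQL can be sketched with the stdlib sqlite3 module (table names here are illustrative, and 'row' is one of the three table_type values):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE VIRTUAL TABLE docs USING fts5(content)')
conn.execute("INSERT INTO docs (content) VALUES ('the cat sat on the mat')")

# Vocab table of type 'row': one row per term in the index.
conn.execute("CREATE VIRTUAL TABLE docs_v USING fts5vocab(docs, 'row')")

# Each row is (term, doc, cnt): the term, the number of documents
# containing it, and its total number of occurrences.
terms = {term: cnt for term, doc, cnt in conn.execute('SELECT * FROM docs_v')}
print(terms)
```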

  • class TableFunction
  • Implement a user-defined table-valued function. Unlike a simple scalar or aggregate function, which returns a single scalar value, a table-valued function can return any number of rows of tabular data.

Simple example:

```python
from playhouse.sqlite_ext import TableFunction


class Series(TableFunction):
    # Name of columns in each row of generated data.
    columns = ['value']

    # Name of parameters the function may be called with.
    params = ['start', 'stop', 'step']

    def initialize(self, start=0, stop=None, step=1):
        """
        Table-functions declare an initialize() method, which is
        called with whatever arguments the user has called the
        function with.
        """
        self.start = self.current = start
        self.stop = stop or float('Inf')
        self.step = step

    def iterate(self, idx):
        """
        Iterate is called repeatedly by the SQLite database engine
        until the required number of rows has been read **or** the
        function raises a `StopIteration` signalling no more rows
        are available.
        """
        if self.current > self.stop:
            raise StopIteration

        ret, self.current = self.current, self.current + self.step
        return (ret,)

# Register the table-function with our database, which ensures it
# is declared whenever a connection is opened.
db.table_function('series')(Series)

# Usage:
cursor = db.execute_sql('SELECT * FROM series(?, ?, ?)', (0, 5, 2))
for value, in cursor:
    print(value)
```

Note

A TableFunction must be registered with a database connection before it can be used. To ensure the table function is always available, you can use the SqliteDatabase.table_function() decorator to register the function with the database.

TableFunction implementations must provide two attributes and implement two methods, described below.

  • columns
  • A list containing the names of the columns for the data returned by the function. For example, a function that is used to split a string on a delimiter might specify 3 columns: [substring, start_idx, end_idx].

  • params

  • The names of the parameters the function may be called with. All parameters, including optional parameters, should be listed. For example, a function that is used to split a string on a delimiter might specify 2 params: [string, delimiter].

  • name

  • Optional - specify the name for the table function. If not provided, the name will be taken from the class name.

  • print_tracebacks = True

  • Print a full traceback for any errors that occur in the table-function's callback methods. When set to False, only the generic OperationalError will be visible.

  • initialize(**parameter_values)

Parameters: parameter_values – Parameters the function was called with.

Returns: No return value.

The initialize method is called to initialize the table function with the parameters the user specified when calling the function.

  • iterate(idx)

Parameters: idx (int) – current iteration step.

Returns: A tuple of row data corresponding to the columns named in the columns attribute.

Raises: StopIteration – To signal that no more rows are available.

This function is called repeatedly and returns successive rows of data. The function may terminate before all rows are consumed (especially if the user specified a LIMIT on the results). Alternatively, the function can signal that no more data is available by raising a StopIteration exception.
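The initialize()/iterate() contract can be exercised without SQLite at all. The sketch below re-declares a plain-Python Series (mirroring the example above, with a small tweak so that stop=0 is honored) together with a driver that mimics how the engine repeatedly calls iterate() until StopIteration:

```python
# A plain-Python re-declaration of the Series table function from the
# example above (no peewee import needed for this sketch).
class Series:
    columns = ['value']
    params = ['start', 'stop', 'step']

    def initialize(self, start=0, stop=None, step=1):
        # Checking `stop is not None` avoids treating stop=0 as "no stop".
        self.start = self.current = start
        self.stop = stop if stop is not None else float('inf')
        self.step = step

    def iterate(self, idx):
        if self.current > self.stop:
            raise StopIteration
        ret, self.current = self.current, self.current + self.step
        return (ret,)


def drive(table_fn_cls, **params):
    """Mimic the engine: initialize(), then iterate() until StopIteration."""
    fn = table_fn_cls()
    fn.initialize(**params)
    rows, idx = [], 0
    while True:
        try:
            rows.append(fn.iterate(idx))
        except StopIteration:
            return rows
        idx += 1


print(drive(Series, start=0, stop=5, step=2))  # [(0,), (2,), (4,)]
```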

  • classmethod register(conn)

Parameters: conn – A sqlite3.Connection object.

Register the table function with a DB-API 2.0 sqlite3.Connection object. Table-valued functions must be registered before they can be used in a query.

Example:

```python
class MyTableFunction(TableFunction):
    name = 'my_func'
    # ... other attributes and methods ...

db = SqliteDatabase(':memory:')
db.connect()

MyTableFunction.register(db.connection())
```

To ensure the TableFunction is registered every time a connection is opened, use the table_function() decorator.

  • ClosureTable(model_class[, foreign_key=None[, referencing_class=None[, referencing_key=None]]])

Parameters:

  • model_class – The model class containing the nodes in the tree.
  • foreign_key – The self-referential parent-node field on the model class. If not provided, peewee will introspect the model to find a suitable key.
  • referencing_class – Intermediate table for a many-to-many relationship.
  • referencing_key – For a many-to-many relationship, the originating side of the relation.

Returns: A VirtualModel for working with a closure table.

Factory function for creating a model class suitable for working with a transitive closure table. Closure tables are VirtualModel subclasses that work with the transitive closure SQLite extension. These special tables are designed to make it easy to efficiently query hierarchical data. The SQLite extension manages an AVL tree behind-the-scenes, transparently updating the tree when your table changes and making it easy to perform common queries on hierarchical data.

To use the closure table extension in your project, you need to:

  • Obtain a copy of the transitive closure extension source code, e.g.

```shell
$ git clone https://gist.github.com/coleifer/7f3593c5c2a645913b92 closure
$ cd closure/
```

  • Compile the extension as a shared library, e.g.

```shell
$ gcc -g -fPIC -shared closure.c -o closure.so
```
  • Create a model for your hierarchical data. The only requirement here is that the model has an integer primary key and a self-referential foreign key. Any additional fields are fine.
```python
class Category(Model):
    name = CharField()
    metadata = TextField()
    parent = ForeignKeyField('self', index=True, null=True)  # Required.

# Generate a model for the closure virtual table.
CategoryClosure = ClosureTable(Category)
```

The self-referentiality can also be achieved via an intermediate table (for a many-to-many relation).

```python
class User(Model):
    name = CharField()

class UserRelations(Model):
    user = ForeignKeyField(User)
    knows = ForeignKeyField(User, backref='_known_by')

    class Meta:
        # Alternatively, a unique index on both columns.
        primary_key = CompositeKey('user', 'knows')

# Generate a model for the closure virtual table, specifying
# UserRelations as the referencing table.
UserClosure = ClosureTable(
    User,
    referencing_class=UserRelations,
    foreign_key=UserRelations.knows,
    referencing_key=UserRelations.user)
```
  • In your application code, make sure you load the extension when you instantiate your Database object. This is done by passing the path to the shared library to the load_extension() method.
```python
db = SqliteExtDatabase('my_database.db')
db.load_extension('/path/to/closure')
```

Warning

There are two caveats you should be aware of when using the transitive closure extension. First, it requires that your source model have an integer primary key. Second, it is strongly recommended that you create an index on the self-referential foreign key.

Example:

```python
class Category(Model):
    name = CharField()
    metadata = TextField()
    parent = ForeignKeyField('self', index=True, null=True)  # Required.

# Generate a model for the closure virtual table.
CategoryClosure = ClosureTable(Category)

# Create the tables if they do not exist.
db.create_tables([Category, CategoryClosure], True)
```

It is now possible to perform interesting queries using the data from the closure table:

```python
# Get all ancestors for a particular node.
laptops = Category.get(Category.name == 'Laptops')
for parent in CategoryClosure.ancestors(laptops):
    print(parent.name)

# Computer Hardware
# Computers
# Electronics
# All products

# Get all descendants for a particular node.
hardware = Category.get(Category.name == 'Computer Hardware')
for node in CategoryClosure.descendants(hardware):
    print(node.name)

# Laptops
# Desktops
# Hard-drives
# Monitors
# LCD Monitors
# LED Monitors
```

API of the VirtualModel returned by ClosureTable().

  • class BaseClosureTable
    • id
    • A field for the primary key of the given node.

    • depth

    • A field representing the relative depth of the given node.

    • root

    • A field representing the relative root node.

    • descendants(node[, depth=None[, include_node=False]])

    • Retrieve all descendants of the given node. If a depth is specified, only nodes at that depth (relative to the given node) will be returned.
```python
node = Category.get(Category.name == 'Electronics')

# Direct child categories.
children = CategoryClosure.descendants(node, depth=1)

# Grand-child categories.
children = CategoryClosure.descendants(node, depth=2)

# Descendants at all depths.
all_descendants = CategoryClosure.descendants(node)
```
    • ancestors(node[, depth=None[, include_node=False]])

    • Retrieve all ancestors of the given node. If a depth is specified, only nodes at that depth (relative to the given node) will be returned.

```python
node = Category.get(Category.name == 'Laptops')

# All ancestors.
all_ancestors = CategoryClosure.ancestors(node)

# Grand-parent category.
grandparent = CategoryClosure.ancestors(node, depth=2)
```
    • siblings(node[, include_node=False])

    • Retrieve all nodes that are children of the specified node's parent.
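The sibling semantics ("children of the node's parent") can be illustrated with plain SQL against a self-referential table, without the closure extension; a stdlib-only sketch (the table and names are illustrative, and the `s.id != n.id` filter plays the role of include_node=False):

```python
import sqlite3

# A self-referential table standing in for the hierarchical model.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE category ('
             'id INTEGER PRIMARY KEY, name TEXT, parent_id INTEGER)')
conn.executemany('INSERT INTO category VALUES (?, ?, ?)', [
    (1, 'Electronics', None),
    (2, 'Computers', 1),
    (3, 'Audio', 1),
    (4, 'Laptops', 2)])

# Siblings of "Computers": rows sharing its parent, excluding itself.
rows = conn.execute(
    'SELECT s.name FROM category AS s '
    'JOIN category AS n ON s.parent_id = n.parent_id '
    'WHERE n.name = ? AND s.id != n.id', ('Computers',)).fetchall()
print(rows)  # [('Audio',)]
```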

Note

For an in-depth discussion of the SQLite transitive closure extension,check out this blog post, Querying Tree Structures in SQLite using Python and the Transitive Closure Extension.

Note

The LSM1 extension has not been released yet (SQLite version 3.22 at time of writing), so consider this feature experimental with potential to change in subsequent releases.

LSM tables define one primary key column and an arbitrary number of additional value columns (which are serialized and stored in a single value field in the storage engine). The primary key must be all of the same type and use one of the following field types:

  • IntegerField
  • TextField
  • BlobField

Since the LSM storage engine is a key/value store, primary keys (including integers) must be specified by the application.

Attention

Secondary indexes are not supported by the LSM engine, so the only efficient queries will be lookups (or range queries) on the primary key. Other fields can be queried and filtered on, but may result in a full table-scan.

Example model declaration:

```python
db = SqliteExtDatabase('my_app.db')
db.load_extension('lsm.so')  # Load shared library.

class EventLog(LSMTable):
    timestamp = IntegerField(primary_key=True)
    action = TextField()
    sender = TextField()
    target = TextField()

    class Meta:
        database = db
        filename = 'eventlog.ldb'  # LSM data is stored in separate db.

# Declare virtual table.
EventLog.create_table()
```

Example queries:

```python
# Use dictionary operators to get, set and delete rows from the LSM
# table. Slices may be passed to represent a range of key values.
def get_timestamp():
    # Return time as integer expressing time in microseconds.
    return int(time.time() * 1000000)

# Create a new row, at current timestamp.
ts = get_timestamp()
EventLog[ts] = ('pageview', 'search', '/blog/some-post/')

# Retrieve row from event log.
log = EventLog[ts]
print(log.action, log.sender, log.target)
# Prints ("pageview", "search", "/blog/some-post/")

# Delete the row.
del EventLog[ts]

# We can also use the "create()" method.
EventLog.create(
    timestamp=get_timestamp(),
    action='signup',
    sender='newsletter',
    target='sqlite-news')
```

Simple key/value model declaration:

```python
class KV(LSMTable):
    key = TextField(primary_key=True)
    value = TextField()

    class Meta:
        database = db
        filename = 'kv.ldb'

db.create_tables([KV])
```

For tables consisting of a single value field, Peewee will return the value directly when getting a single item. You can also request slices of rows, in which case Peewee returns a corresponding Select query, which can be iterated over. Below are some examples:

```python
>>> KV['k0'] = 'v0'
>>> print(KV['k0'])
'v0'

>>> data = [{'key': 'k%d' % i, 'value': 'v%d' % i} for i in range(20)]
>>> KV.insert_many(data).execute()

>>> KV.select().count()
20

>>> KV['k8']
'v8'

>>> list(KV['k4.1':'k7.x'])
[Row(key='k5', value='v5'),
 Row(key='k6', value='v6'),
 Row(key='k7', value='v7')]

>>> list(KV['k6xxx':])
[Row(key='k7', value='v7'),
 Row(key='k8', value='v8'),
 Row(key='k9', value='v9')]
```

You can also index the LSMTable using expressions:

```python
>>> list(KV[KV.key > 'k6'])
[Row(key='k7', value='v7'),
 Row(key='k8', value='v8'),
 Row(key='k9', value='v9')]

>>> list(KV[(KV.key > 'k6') & (KV.value != 'v8')])
[Row(key='k7', value='v7'),
 Row(key='k9', value='v9')]
```

You can delete single rows using del or multiple rows using slicesor expressions:

```python
>>> del KV['k1']
>>> del KV['k3x':'k8']
>>> del KV[KV.key.between('k10', 'k18')]

>>> list(KV[:])
[Row(key='k0', value='v0'),
 Row(key='k19', value='v19'),
 Row(key='k2', value='v2'),
 Row(key='k3', value='v3'),
 Row(key='k9', value='v9')]
```

Attempting to get a single non-existent key will result in a KeyError, but slices will not raise an exception:

```python
>>> KV['k1']
...
KeyError: 'k1'

>>> list(KV['k1':'k1'])
[]
```
  • class ZeroBlob(length)

Parameters: length (int) – Size of the blob in bytes.

ZeroBlob is used solely to reserve space for storing a BLOB that supports incremental I/O. To use the SQLite BLOB-store it is necessary to first insert a ZeroBlob of the desired size into the row you wish to use with incremental I/O.

For example, see Blob.

  • class Blob(database, table, column, rowid[, read_only=False])

Parameters:

  • database – SqliteExtDatabase instance.
  • table (str) – Name of table being accessed.
  • column (str) – Name of column being accessed.
  • rowid (int) – Primary-key of row being accessed.
  • read_only (bool) – Prevent any modifications to the blob data.

Open a blob, stored in the given table/column/row, for incremental I/O. To allocate storage for new data, you can use the ZeroBlob, which is very efficient.

```python
class RawData(Model):
    data = BlobField()

# Allocate 100MB of space for writing a large file incrementally:
query = RawData.insert({'data': ZeroBlob(1024 * 1024 * 100)})
rowid = query.execute()

# Now we can open the row for incremental I/O:
blob = Blob(db, 'rawdata', 'data', rowid)

# Read from the file and write to the blob in chunks of 4096 bytes.
while True:
    data = file_handle.read(4096)
    if not data:
        break
    blob.write(data)

bytes_written = blob.tell()
blob.close()
```
  • read([n=None])

Parameters: n (int) – Only read up to n bytes from the current position in the file.

Read up to n bytes from the current position in the blob file. If n is not specified, the entire blob will be read.

  • seek(offset[, whence=0])

Parameters:

  1. - **offset** (_int_) Seek to the given offset in the file.
  2. - **whence** (_int_) Seek relative to the specified frame of reference.

Values for whence:

  1. - <code>0</code>: beginning of file
  2. - <code>1</code>: current position
  3. - <code>2</code>: end of file
  • tell()
  • Return current offset within the file.

  • write(data)

Parameters: data (bytes) – Data to be written.

Writes the given data, starting at the current position in the file.

  • close()
  • Close the file and free associated resources.

  • reopen(rowid)

Parameters: rowid (int) – Primary key of the row to open.

If a blob has already been opened for a given table/column, you can use the reopen() method to re-use the same Blob object for accessing multiple rows in the table.