Plugin Development - Storing Custom Entities

While not all plugins need it, your plugin might need to store more than its configuration in the database. In that case, Kong provides you with an abstraction on top of its primary datastores which allows you to store custom entities.

As explained in the previous chapter, Kong interacts with the model layer through classes we refer to as "DAOs", which are available on a singleton often referred to as the "DAO Factory". This chapter will explain how to provide an abstraction for your own entities.

Modules

  1. kong.plugins.<plugin_name>.daos
  2. kong.plugins.<plugin_name>.migrations.init
  3. kong.plugins.<plugin_name>.migrations.000_base_<plugin_name>
  4. kong.plugins.<plugin_name>.migrations.001_<from-version>_to_<to_version>
  5. kong.plugins.<plugin_name>.migrations.002_<from-version>_to_<to_version>

Create the migrations folder

Once you have defined your model, you must create the migration modules that Kong will execute to create the tables in which the records of your entities will be stored.

If your plugin is intended to support both Cassandra and Postgres, then both migrations must be written.

If your plugin doesn't already have one, add a <plugin_name>/migrations folder to it, and create an init.lua file inside it if one doesn't exist yet. This is where all the migrations for your plugin will be referenced.

The initial version of your migrations/init.lua file will point to a single migration.

In this case we have called it 000_base_my_plugin.

    -- `migrations/init.lua`
    return {
      "000_base_my_plugin",
    }

This means that there will be a file in <plugin_name>/migrations/000_base_my_plugin.lua containing the initial migrations. We’ll see how this is done in a minute.

Add a new migration to an existing plugin

Sometimes it is necessary to introduce changes after a version of a plugin has already been released. New functionality might be needed, or a database table might need changing.

When this happens, you must create a new migration file. You must never modify existing migration files once they have been published (you can still make them more robust and bulletproof if you want, e.g. by making them reentrant).

While there is no strict rule for naming your migration files, there is a convention that the initial one is prefixed by 000, the next one by 001, and so on.

Following our previous example, if we wanted to release a new version of the plugin with changes in the database (for example, a new table called foo was needed), we would add a file called <plugin_name>/migrations/001_100_to_110.lua and reference it in the migrations init file like so (where 100 stands for the previous plugin version, 1.0.0, and 110 for the version being migrated to, 1.1.0):

    -- `<plugin_name>/migrations/init.lua`
    return {
      "000_base_my_plugin",
      "001_100_to_110",
    }

Migration file syntax

While Kong’s core migrations support both Postgres and Cassandra, custom plugins can choose to support either both of them or just one.

A migration file is a Lua file which returns a table with the following structure:

    -- `<plugin_name>/migrations/000_base_my_plugin.lua`
    return {
      postgres = {
        up = [[
          CREATE TABLE IF NOT EXISTS "my_plugin_table" (
            "id"          UUID PRIMARY KEY,
            "created_at"  TIMESTAMP WITHOUT TIME ZONE,
            "col1"        TEXT
          );

          DO $$
          BEGIN
            CREATE INDEX IF NOT EXISTS "my_plugin_table_col1"
                                    ON "my_plugin_table" ("col1");
          EXCEPTION WHEN UNDEFINED_COLUMN THEN
            -- Do nothing, accept existing state
          END$$;
        ]],
      },

      cassandra = {
        up = [[
          CREATE TABLE IF NOT EXISTS my_plugin_table (
            id          uuid PRIMARY KEY,
            created_at  timestamp,
            col1        text
          );

          CREATE INDEX IF NOT EXISTS ON my_plugin_table (col1);
        ]],
      },
    }

    -- `<plugin_name>/migrations/001_100_to_110.lua`
    return {
      postgres = {
        up = [[
          DO $$
          BEGIN
            ALTER TABLE IF EXISTS ONLY "my_plugin_table" ADD "cache_key" TEXT UNIQUE;
          EXCEPTION WHEN DUPLICATE_COLUMN THEN
            -- Do nothing, accept existing state
          END;
          $$;
        ]],
        teardown = function(connector, helpers)
          assert(connector:connect_migrations())
          assert(connector:query([[
            DO $$
            BEGIN
              ALTER TABLE IF EXISTS ONLY "my_plugin_table" DROP "col1";
            EXCEPTION WHEN UNDEFINED_COLUMN THEN
              -- Do nothing, accept existing state
            END$$;
          ]]))
        end,
      },

      cassandra = {
        up = [[
          ALTER TABLE my_plugin_table ADD cache_key text;
          CREATE INDEX IF NOT EXISTS ON my_plugin_table (cache_key);
        ]],
        teardown = function(connector, helpers)
          assert(connector:connect_migrations())
          assert(connector:query("ALTER TABLE my_plugin_table DROP col1"))
        end,
      },
    }

If a plugin supports only Postgres or only Cassandra, then only the section for that strategy is needed. Each strategy section has two parts: up and teardown.

  • up is an optional string of raw SQL/CQL statements. These statements will be executed when kong migrations up is run.
  • teardown is an optional Lua function which takes a connector parameter. The connector can invoke its query method to execute SQL/CQL queries. teardown is triggered by kong migrations finish.

It is recommended that all non-destructive operations, such as the creation of new tables and the addition of new records, are done in the up sections, while destructive operations (such as removing data, changing column types, or inserting new data) are done in the teardown sections.

In both cases, it is recommended that all SQL/CQL statements are written to be as reentrant as possible: DROP TABLE IF EXISTS instead of DROP TABLE, CREATE INDEX IF NOT EXISTS instead of CREATE INDEX, and so on. If a migration fails for some reason, the expected first attempt at fixing the problem is simply re-running the migrations.

Unlike Postgres, Cassandra does not support constraints such as "NOT NULL", "UNIQUE" or "FOREIGN KEY", but Kong provides you with such features when you define your model's schema. Bear in mind that this schema will be the same for both Postgres and Cassandra; hence, you might trade off a pure SQL schema for one that also works with Cassandra.

IMPORTANT: if your schema uses a unique constraint, then Kong will enforce it for Cassandra, but for Postgres you must set this constraint in the migrations.
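
For example, here is a minimal sketch of how such a constraint could be declared on Postgres, assuming the illustrative my_plugin_table above had its col1 field marked unique = true in the schema:

    -- sketch only: what the Postgres `up` section could contain if `col1`
    -- were declared with `unique = true` in the schema
    return {
      postgres = {
        up = [[
          CREATE TABLE IF NOT EXISTS "my_plugin_table" (
            "id"   UUID PRIMARY KEY,
            "col1" TEXT UNIQUE
          );
        ]],
      },
      -- nothing extra is needed for Cassandra: Kong performs the uniqueness
      -- check in Lua before writing to the datastore
    }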

To see a real-life example, take a look at the Key-Auth plugin migrations.

Define a schema

The first step to using custom entities in a custom plugin is defining one or more schemas.

A schema is a Lua table that describes an entity. It contains structural information, such as the names and types of the entity's fields, similar to the fields describing your plugin configuration. Compared to plugin configuration schemas, custom entity schemas require additional metadata (e.g. which field, or fields, constitute the entity's primary key).

Schemas are to be defined in a module named:

    kong.plugins.<plugin_name>.daos

Meaning that there should be a file called <plugin_name>/daos.lua inside your plugin folder. The daos.lua file should return a table containing one or more schemas. For example:

    -- daos.lua
    local typedefs = require "kong.db.schema.typedefs"

    return {
      -- this plugin only results in one custom DAO, named `keyauth_credentials`:
      keyauth_credentials = {
        name                  = "keyauth_credentials", -- the actual table in the database
        endpoint_key          = "key",
        primary_key           = { "id" },
        cache_key             = { "key" },
        generate_admin_api    = true,
        admin_api_name        = "key-auths",
        admin_api_nested_name = "key-auth",
        fields = {
          {
            -- a value to be inserted by the DAO itself
            -- (think of serial id and the uniqueness of such required here)
            id = typedefs.uuid,
          },
          {
            -- also inserted by the DAO itself
            created_at = typedefs.auto_timestamp_s,
          },
          {
            -- a foreign key to a consumer's id
            consumer = {
              type      = "foreign",
              reference = "consumers",
              default   = ngx.null,
              on_delete = "cascade",
            },
          },
          {
            -- a unique API key
            key = {
              type     = "string",
              required = false,
              unique   = true,
              auto     = true,
            },
          },
        },
      },
    }

This example daos.lua file introduces a single schema called keyauth_credentials.

Here is a description of some top-level properties:

  • name (string, required): It will be used to determine the DAO name (kong.db.[name]).
  • primary_key (table, required): Field names forming the entity's primary key. Schemas support composite keys, even if most Kong core entities currently use a UUID named id. If you are using Cassandra and need a composite key, it should have the same fields as the partition key.
  • endpoint_key (string, optional): The name of the field used as an alternative identifier on the Admin API. In the example above, key is the endpoint_key. This means that a credential with id = 123 and key = "foo" could be referenced as both /keyauth_credentials/123 and /keyauth_credentials/foo.
  • cache_key (table, optional): Contains the names of the fields used for generating the cache_key, a string which must unequivocally identify the entity inside Kong's cache. A unique field, like key in the example, is usually a good candidate. In other cases a combination of several fields is preferable.
  • generate_admin_api (boolean, optional): Whether to auto-generate an Admin API for the entity or not. By default the Admin API is generated for all DAOs, including custom ones. If you want to create a fully customized Admin API for the DAO, or want to disable auto-generation for it altogether, set this option to false.
  • admin_api_name (string, optional): When generate_admin_api is enabled, the Admin API auto-generator uses name to derive the collection URLs of the auto-generated Admin API. Sometimes you may want to name the collection URLs differently from name. E.g. for the keyauth_credentials DAO we want the auto-generator to generate endpoints under the alternate, more URL-friendly name key-auths, i.e. http://<KONG_ADMIN>/key-auths instead of http://<KONG_ADMIN>/keyauth_credentials.
  • admin_api_nested_name (string, optional): Similar to admin_api_name, admin_api_nested_name specifies the name the Admin API auto-generator uses for the DAO in nested contexts. You only need this parameter if you are not happy with name or admin_api_name. For legacy reasons, Kong has URLs like http://<KONG_ADMIN>/consumers/john/key-auth where key-auth does not follow the plural form of http://<KONG_ADMIN>/key-auths; admin_api_nested_name enables you to specify a different name in those cases.
  • fields (table): Each field definition is a table with a single key, which is the field's name. The table value is a subtable containing the field's attributes, some of which are explained below.

Many field attributes encode validation rules. When attempting to insert or update entities using the DAO, these validations will be checked, and an error returned if the provided input doesn’t conform to them.
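
For instance, here is a hedged sketch of what such a validation failure looks like at runtime, using the insert DAO method described later in this chapter and the keyauth_credentials schema above:

    -- a sketch: the schema rejects a value of the wrong type for `key`
    local entity, err, err_t = kong.db.keyauth_credentials:insert({
      key = 12345, -- invalid: the schema declares `key` as a string
    })

    if not entity then
      -- `err` is a human-readable string; `err_t` carries the same
      -- information in table form, useful for inspecting individual fields
      kong.log.err("schema validation failed: ", err)
    end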

The typedefs variable (obtained by requiring kong.db.schema.typedefs) is a table containing a lot of useful type definitions and aliases, including typedefs.uuid, the most usual type for the primary key, and typedefs.auto_timestamp_s, for created_at fields. It is used extensively when defining fields.
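
These typedefs are ordinary field definitions under the hood. Roughly, and simplified for illustration (the authoritative definitions live in kong/db/schema/typedefs.lua), the two used above amount to something like:

    -- approximate equivalents, for illustration only; see
    -- kong/db/schema/typedefs.lua for the authoritative definitions
    local uuid             = { type = "string",  uuid = true,      auto = true }
    local auto_timestamp_s = { type = "integer", timestamp = true, auto = true }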

Here’s a non-exhaustive explanation of some of the field attributes available:

  • type (string): Schemas support the following scalar types: "string", "integer", "number" and "boolean". Compound types like "array", "record", or "set" are also supported.

    In addition to these values, the type attribute can also take the special "foreign" value, which denotes a foreign relationship.

    Each field will need to be backed by a database field of an appropriately similar type, created via migrations.

    type is the only required attribute for all field definitions.

  • default (any, matching the type attribute): Specifies the value the field will take when inserting an entity, if no value was provided. Default values are always set via Lua, never by the underlying database. It is thus not recommended to set any default values on fields in migrations.

  • required (boolean): When set to true on a field, an error will be thrown when attempting to insert an entity lacking a value for that field (unless the field in question has a default value).

  • unique (boolean): When set to true on a field, an error will be thrown when attempting to insert an entity whose value for that field is already used by another entity.

    This attribute must be backed by declaring the field as UNIQUE in migrations when using PostgreSQL. The Cassandra strategy does a check in Lua before attempting inserts, so it doesn't require any special treatment.

  • auto (boolean): When attempting to insert an entity without providing a value for a field where auto is set to true:

      • If type == "uuid", the field will take a random UUID as its value.
      • If type == "string", the field will take a random string.
      • If the field name is created_at or updated_at, the field will take the current time when inserting / updating, as appropriate.

  • reference (string): Required for fields of type foreign. The given string must be the name of an existing schema, which the foreign key will "point to". This means that if a schema B has a foreign key pointing to schema A, then A needs to be loaded before B.

  • on_delete (string): Optional and exclusive to fields of type foreign. It dictates what must happen with entities linked by a foreign key when the entity being referenced is deleted. It can have three possible values:

      • "cascade": When the linked entity is deleted, all the dependent entities must also be deleted.
      • "null": When the linked entity is deleted, all the dependent entities will have their foreign key field set to null.
      • "restrict": Attempting to delete an entity with linked entities will result in an error.

    In Cassandra this is handled with pure Lua code, but in PostgreSQL it will be necessary to declare the references as ON DELETE CASCADE/NULL/RESTRICT in a migration.

To learn more about schemas, see the typedefs module (kong.db.schema.typedefs) and the chapter on plugin configuration.

The custom DAO

The schemas are not used directly to interact with the database. Instead, a DAO is built for each valid schema. A DAO takes the name of the schema it wraps, and is accessible through the kong.db interface.

For the example schema above, the DAO generated would be available for plugins via kong.db.keyauth_credentials.
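
For instance, here is a minimal sketch of reaching that DAO from a plugin handler (the handler module itself is illustrative and not part of this chapter):

    -- `handler.lua` (sketch)
    local MyPluginHandler = {
      PRIORITY = 1000,
      VERSION  = "0.1.0",
    }

    function MyPluginHandler:access(conf)
      -- the DAO generated from the `keyauth_credentials` schema above
      local credentials = kong.db.keyauth_credentials

      -- credentials:select(...), credentials:insert(...), etc.
      -- are described in the remainder of this chapter
    end

    return MyPluginHandler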

Select an entity

    local entity, err, err_t = kong.db.<name>:select(primary_key)

Attempts to find an entity in the database and return it. Three things can happen:

  • The entity was found. In this case, it is returned as a regular Lua table.
  • An error occurred - for example the connection with the database was lost. In that case the first returned value will be nil, the second one will be a string describing the error, and the last one will be the same error in table form.
  • An error does not occur but the entity is not found. Then the function will just return nil, with no error.

Example of usage:

    local entity, err = kong.db.keyauth_credentials:select({
      id = "c77c50d2-5947-4904-9f37-fa36182a71a9"
    })

    if err then
      kong.log.err("Error when selecting keyauth credential: " .. err)
      return nil
    end

    if not entity then
      kong.log.err("Could not find credential.")
      return nil
    end

Iterate over all the entities

    for entity, err in kong.db.<name>:each(entities_per_page) do
      if err then
        ...
      end
      ...
    end

This method efficiently iterates over all the entities in the database by making paginated requests. The entities_per_page parameter, which defaults to 100, controls how many entities per page are returned.

On each iteration, a new entity will be returned or, if there is any error, the err variable will be filled with the error. The recommended way to iterate is to check err first, and otherwise assume that entity is present.

Example of usage:

    for credential, err in kong.db.keyauth_credentials:each(1000) do
      if err then
        kong.log.err("Error when iterating over keyauth credentials: " .. err)
        return nil
      end

      kong.log("id: " .. credential.id)
    end

This example iterates over the credentials in pages of 1000 items, logging their ids unless an error happens.

Insert an entity

    local entity, err, err_t = kong.db.<name>:insert(<values>)

Inserts an entity in the database and returns a copy of the inserted entity, or nil, an error message (a string) and the same error in table form.

When the insert is successful, the returned entity contains the extra values produced by default and auto.

The following example uses the keyauth_credentials DAO to insert a credential for a given Consumer, setting its key to "secret". Notice the syntax for referencing foreign keys.

    local entity, err = kong.db.keyauth_credentials:insert({
      consumer = { id = "c77c50d2-5947-4904-9f37-fa36182a71a9" },
      key = "secret",
    })

    if not entity then
      kong.log.err("Error when inserting keyauth credential: " .. err)
      return nil
    end

The returned entity, assuming no error happened, will have auto-filled fields like id and created_at.
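
Those auto-generated values can be used right away; for example, a small sketch continuing the snippet above:

    -- `id` and `created_at` were filled in automatically by the DAO
    kong.log("created credential ", entity.id, " at ", entity.created_at)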

Update an entity

    local entity, err, err_t = kong.db.<name>:update(primary_key, <values>)

Updates an existing entity with the given set of values, provided it can be found using the given primary key.

The returned entity will be the entity after the update takes place, or nil + an error message + an error table.

The following example modifies the key field of an existing credential given the credential’s id:

    local entity, err = kong.db.keyauth_credentials:update(
      { id = "2b6a2022-770a-49df-874d-11e2bf2634f5" },
      { key = "updated_secret" }
    )

    if not entity then
      kong.log.err("Error when updating keyauth credential: " .. err)
      return nil
    end

Notice how the syntax for specifying a primary key is similar to the one used to specify a foreign key.

Upsert an entity

    local entity, err, err_t = kong.db.<name>:upsert(primary_key, <values>)

upsert is a mixture of insert and update:

  • When the provided primary_key identifies an existing entity, it works like update.
  • When the provided primary_key does not identify an existing entity, it works like insert.

Given this code:

    local entity, err = kong.db.keyauth_credentials:upsert(
      { id = "2b6a2022-770a-49df-874d-11e2bf2634f5" },
      { consumer = { id = "a96145fb-d71e-4c88-8a5a-2c8b1947534c" } }
    )

    if not entity then
      kong.log.err("Error when upserting keyauth credential: " .. err)
      return nil
    end

Two things can happen:

  • If a credential with id 2b6a2022-770a-49df-874d-11e2bf2634f5 exists, then this code will attempt to set its Consumer to the provided one.
  • If the credential does not exist, then this code is attempting to create a new credential, with the given id and Consumer.

Delete an entity

    local ok, err, err_t = kong.db.<name>:delete(primary_key)

Attempts to delete the entity identified by primary_key. It returns true if the entity doesn’t exist after calling this method, or nil + error + error table if an error is detected.

Notice that calling delete will succeed even if the entity didn't exist before calling it. This is for performance reasons: we want to avoid doing a read-before-delete when we can avoid it. If you want to do this check, you must do it manually by checking with select before invoking delete, as sketched after the example below.

Example:

    local ok, err = kong.db.keyauth_credentials:delete({
      id = "2b6a2022-770a-49df-874d-11e2bf2634f5"
    })

    if not ok then
      kong.log.err("Error when deleting keyauth credential: " .. err)
      return nil
    end
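
If you do want the manual check mentioned above, a sketch of it could look like this (reusing the same credential id):

    local primary_key = { id = "2b6a2022-770a-49df-874d-11e2bf2634f5" }

    -- 1. check whether the entity exists at all
    local entity, err = kong.db.keyauth_credentials:select(primary_key)
    if err then
      kong.log.err("Error when selecting keyauth credential: " .. err)
      return nil
    end

    if not entity then
      kong.log.err("Credential not found, nothing to delete.")
      return nil
    end

    -- 2. the entity exists, so delete it
    local ok, err = kong.db.keyauth_credentials:delete(primary_key)
    if not ok then
      kong.log.err("Error when deleting keyauth credential: " .. err)
      return nil
    end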

Cache custom entities

Sometimes custom entities are required on every request/response, which in turn triggers a query on the datastore every time. This is very inefficient because querying the datastore adds latency and slows the request/response down, and the resulting increased load on the datastore could affect the datastore performance itself and, in turn, other Kong nodes.

When a custom entity is required on every request/response it is good practice to cache it in-memory by leveraging the in-memory cache API provided by Kong.
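
As a brief preview of that pattern (the next chapter covers it in detail), here is a hedged sketch combining the cache_key declared in the schema with Kong's in-memory cache; the load_credential and find_credential helpers are illustrative names:

    -- illustrative loader: only invoked by kong.cache on a cache miss
    local function load_credential(key)
      -- `select_by_key` is generated because `key` is a unique field
      local credential, err = kong.db.keyauth_credentials:select_by_key(key)
      if not credential then
        return nil, err
      end
      return credential
    end

    local function find_credential(key)
      -- the cache key is derived from the `cache_key = { "key" }` schema entry
      local cache_key = kong.db.keyauth_credentials:cache_key(key)
      return kong.cache:get(cache_key, nil, load_credential, key)
    end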

The next chapter will focus on caching custom entities, and invalidating them when they change in the datastore: Caching custom entities.

