API reference

TimescaleDB API Reference

Command List (A-Z)

Hypertable management

add_dimension()

Add an additional partitioning dimension to a TimescaleDB hypertable. The column selected as the dimension can either use interval partitioning (e.g., for a second time partition) or hash partitioning.

WARNING: The add_dimension command can only be executed after a table has been converted to a hypertable (via create_hypertable), and must similarly be run only on an empty hypertable.

Space partitions: The use of additional partitioning is a very specialized use case. Most users will not need to use it.

Space partitions use hashing: Every distinct item is hashed to one of N buckets. Remember that we are already using (flexible) time intervals to manage chunk sizes; the main purpose of space partitioning is to enable parallel I/O to the same time interval.

Parallel I/O can benefit in two scenarios: (a) two or more concurrent queries should be able to read from different disks in parallel, or (b) a single query should be able to use query parallelization to read from multiple disks in parallel.

Thus, users looking for parallel I/O have two options:

  1. Use a RAID setup across multiple physical disks, and expose a single logical disk to the hypertable (i.e., via a single tablespace).

  2. For each physical disk, add a separate tablespace to the database. TimescaleDB allows you to actually add multiple tablespaces to a single hypertable (although under the covers, a hypertable’s chunks are spread across the tablespaces associated with that hypertable).

We recommend a RAID setup when possible, as it supports both forms of parallelization described above (i.e., separate queries to separate disks, single query to multiple disks in parallel). The multiple tablespace approach only supports the former. With a RAID setup, no spatial partitioning is required.

That said, when using space partitions, we recommend using 1 space partition per disk.

TimescaleDB does not benefit from a very large number of space partitions (such as the number of unique items you expect in the partition field). A very large number of such partitions leads both to poorer per-partition load balancing (the mapping of items to partitions using hashing) and to much increased planning latency for some types of queries.

Required Arguments

Name | Description
main_table | Identifier of the hypertable to add the dimension to.
column_name | Name of the column to partition by.

Optional Arguments

Name | Description
number_partitions | Number of hash partitions to use on column_name. Must be > 0.
chunk_time_interval | Interval that each chunk covers. Must be > 0.
partitioning_func | The function to use for calculating a value's partition (see create_hypertable instructions).
if_not_exists | Set to true to avoid throwing an error if a dimension for the column already exists. A notice is issued instead. Defaults to false.

Returns

Column | Description
dimension_id | ID of the dimension in the TimescaleDB internal catalog.
schema_name | Schema name of the hypertable.
table_name | Table name of the hypertable.
column_name | Column name of the column to partition by.
created | True if the dimension was added, false when if_not_exists is true and no dimension was added.

When executing this function, either number_partitions or chunk_time_interval must be supplied, which dictates whether the dimension uses hash or interval partitioning.

The chunk_time_interval should be specified as follows:

  • If the column to be partitioned is a TIMESTAMP, TIMESTAMPTZ, or DATE, this length should be specified either as an INTERVAL type or an integer value in microseconds.

  • If the column is some other integer type, this length should be an integer that reflects the column’s underlying semantics (e.g., the chunk_time_interval should be given in milliseconds if this column is the number of milliseconds since the UNIX epoch).
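For the integer case, the rules above can be sketched as follows. This is an illustrative example, assuming a hypothetical integer column time_ms holding milliseconds since the UNIX epoch:

```sql
-- Sketch, assuming a hypothetical column "time_ms" (BIGINT, milliseconds
-- since the UNIX epoch): the interval must be in the column's own units,
-- so 86400000 ms gives one chunk per day.
SELECT add_dimension('conditions', 'time_ms', chunk_time_interval => 86400000);
```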

WARNING: Supporting more than one additional dimension is currently experimental. For any production environments, users are recommended to use at most one “space” dimension (in addition to the required time dimension specified in create_hypertable).

Sample Usage

First convert table conditions to a hypertable with just time partitioning on column time, then add an additional partition key on location with four partitions:

  SELECT create_hypertable('conditions', 'time');
  SELECT add_dimension('conditions', 'location', number_partitions => 4);

Convert table conditions to hypertable with time partitioning on time and space partitioning (2 partitions) on location, then add two additional dimensions.

  SELECT create_hypertable('conditions', 'time', 'location', 2);
  SELECT add_dimension('conditions', 'time_received', chunk_time_interval => INTERVAL '1 day');
  SELECT add_dimension('conditions', 'device_id', number_partitions => 2);
  SELECT add_dimension('conditions', 'device_id', number_partitions => 2, if_not_exists => true);

attach_tablespace()

Attach a tablespace to a hypertable and use it to store chunks. A tablespace is a directory on the filesystem that allows control over where individual tables and indexes are stored. A common use case is to create a tablespace for a particular storage disk, allowing tables to be stored there. Please review the standard PostgreSQL documentation for more information on tablespaces.

TimescaleDB can manage a set of tablespaces for each hypertable, automatically spreading chunks across the set of tablespaces attached to a hypertable. If a hypertable is hash partitioned, TimescaleDB will try to place chunks that belong to the same partition in the same tablespace. Changing the set of tablespaces attached to a hypertable may also change the placement behavior. A hypertable with no attached tablespaces will have its chunks placed in the database’s default tablespace.

Required Arguments

Name | Description
tablespace | Name of the tablespace to attach.
hypertable | Identifier of the hypertable to attach the tablespace to.

Tablespaces need to be created before being attached to a hypertable. Once created, tablespaces can be attached to multiple hypertables simultaneously to share the underlying disk storage. Associating a regular table with a tablespace using the TABLESPACE option to CREATE TABLE, prior to calling create_hypertable, will have the same effect as calling attach_tablespace immediately following create_hypertable.
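A minimal sketch of creating a tablespace before attaching it; the filesystem path here is purely illustrative:

```sql
-- Sketch: create a tablespace backed by a dedicated disk (the LOCATION
-- path is a hypothetical mount point), then attach it to a hypertable.
CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/postgresql/data';
SELECT attach_tablespace('disk1', 'conditions');
```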

Optional Arguments

Name | Description
if_not_attached | Set to true to avoid throwing an error if the tablespace is already attached to the table. A notice is issued instead. Defaults to false.

Sample Usage

Attach the tablespace disk1 to the hypertable conditions:

  SELECT attach_tablespace('disk1', 'conditions');
  SELECT attach_tablespace('disk2', 'conditions', if_not_attached => true);

WARNING: The management of tablespaces on hypertables is currently an experimental feature.


create_hypertable()

Creates a TimescaleDB hypertable from a PostgreSQL table (replacing the latter), partitioned on time and with the option to partition on one or more other columns (i.e., space). All actions, such as ALTER TABLE, SELECT, etc., still work on the resulting hypertable.

Required Arguments

Name | Description
main_table | Identifier of the table to convert to a hypertable.
time_column_name | Name of the column containing time values as well as the primary column to partition by.

Optional Arguments

Name | Description
partitioning_column | Name of an additional column to partition by. If provided, the number_partitions argument must also be provided.
number_partitions | Number of hash partitions to use for partitioning_column. Must be > 0.
chunk_time_interval | Interval in event time that each chunk covers. Must be > 0. As of TimescaleDB v0.11.0, default is 7 days. For previous versions, default is 1 month.
create_default_indexes | Boolean whether to create default indexes on time/partitioning columns. Default is TRUE.
if_not_exists | Boolean whether to print warning if table already converted to hypertable or raise exception. Default is FALSE.
partitioning_func | The function to use for calculating a value's partition.
associated_schema_name | Name of the schema for internal hypertable tables. Default is "_timescaledb_internal".
associated_table_prefix | Prefix for internal hypertable chunk names. Default is "_hyper".
migrate_data | Set to true to migrate any existing main_table data to chunks in the new hypertable. A non-empty table will generate an error without this option. Note that, for large tables, the migration might take a long time. Defaults to false.
time_partitioning_func | Function to convert incompatible primary time column values to compatible ones. The function must be IMMUTABLE.

Returns

Column | Description
hypertable_id | ID of the hypertable in TimescaleDB.
schema_name | Schema name of the table converted to hypertable.
table_name | Table name of the table converted to hypertable.
created | True if the hypertable was created, false when if_not_exists is true and no hypertable was created.

TIP: If you use SELECT * FROM create_hypertable(...) you will get the return value formatted as a table with column headings.

WARNING: The use of the migrate_data argument to convert a non-empty table can lock the table for a significant amount of time, depending on how much data is in the table. It can also run into deadlock if foreign key constraints exist to other tables.

If you would like finer control over index formation and other aspects of your hypertable, follow these migration instructions instead.

When converting a normal SQL table to a hypertable, pay attention to how you handle constraints. A hypertable can contain foreign keys to normal SQL table columns, but the reverse is not allowed. UNIQUE and PRIMARY constraints must include the partitioning key.
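The constraint rule can be sketched as follows; the table definition is illustrative:

```sql
-- Sketch: a UNIQUE or PRIMARY KEY constraint on a hypertable must include
-- the partitioning key ("time" here). A key on device_id alone would be
-- rejected by create_hypertable.
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT NOT NULL,
  temperature DOUBLE PRECISION,
  PRIMARY KEY (time, device_id)   -- includes the partitioning key
);
SELECT create_hypertable('conditions', 'time');
```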

A deadlock is likely to happen when concurrent transactions simultaneously insert data into tables that are referenced in the foreign key constraints and into the table being converted. The deadlock can be prevented by manually obtaining a SHARE ROW EXCLUSIVE lock on the referenced tables before calling create_hypertable in the same transaction; see the PostgreSQL documentation for the syntax.
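A sketch of the locking pattern just described, assuming a hypothetical referenced table devices:

```sql
-- Sketch: take the lock on the referenced table in the same transaction
-- as create_hypertable, so concurrent inserts cannot deadlock with the
-- migration ("devices" is an illustrative foreign-key target).
BEGIN;
LOCK TABLE devices IN SHARE ROW EXCLUSIVE MODE;
SELECT create_hypertable('conditions', 'time', migrate_data => true);
COMMIT;
```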

Units

The ‘time’ column supports the following data types:

  • Timestamp (TIMESTAMP, TIMESTAMPTZ)
  • DATE
  • Integer (SMALLINT, INT, BIGINT)

TIP: The type flexibility of the ‘time’ column allows the use of non-time-based values as the primary chunk partitioning column, as long as those values can increment.

TIP: For incompatible data types (e.g., jsonb), you can specify a function via the time_partitioning_func argument to extract a compatible value.

The units of chunk_time_interval should be set as follows:

  • For time columns having timestamp or DATE types, the chunk_time_interval should be specified either as an interval type or an integral value in microseconds.

  • For integer types, the chunk_time_interval must be set explicitly, as the database does not otherwise understand the semantics of what each integer value represents (a second, millisecond, nanosecond, etc.). So if your time column is the number of milliseconds since the UNIX epoch, and you wish each chunk to cover 1 day, you should specify chunk_time_interval => 86400000.

In case of hash partitioning (i.e., number_partitions is greater than zero), it is possible to optionally specify a custom partitioning function. If no custom partitioning function is specified, the default partitioning function is used. The default partitioning function calls PostgreSQL’s internal hash function for the given type, if one exists. Thus, a custom partitioning function can be used for value types that do not have a native PostgreSQL hash function. A partitioning function should take a single anyelement type argument and return a positive integer hash value. Note that this hash value is not a partition ID, but rather the inserted value’s position in the dimension’s key space, which is then divided across the partitions.
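A minimal sketch of such a function; the name location_hash is illustrative, and hashtext is an internal PostgreSQL hashing function used here only for demonstration:

```sql
-- Sketch of a custom partitioning function: takes anyelement, returns a
-- positive integer position in the dimension's key space (not a partition
-- ID). "location_hash" is a hypothetical name matching the sample below.
CREATE OR REPLACE FUNCTION location_hash(val anyelement)
RETURNS integer
LANGUAGE SQL IMMUTABLE AS
$$ SELECT abs(hashtext(val::text)) $$;
```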

TIP: The time column in create_hypertable must be defined as NOT NULL. If this is not already specified on table creation, create_hypertable will automatically add this constraint on the table when it is executed.

Sample Usage

Convert table conditions to hypertable with just time partitioning on column time:

  SELECT create_hypertable('conditions', 'time');

Convert table conditions to hypertable, setting chunk_time_interval to 24 hours.

  SELECT create_hypertable('conditions', 'time', chunk_time_interval => 86400000000);
  SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');

Convert table conditions to hypertable with time partitioning on time and space partitioning (4 partitions) on location:

  SELECT create_hypertable('conditions', 'time', 'location', 4);

The same as above, but using a custom partitioning function:

  SELECT create_hypertable('conditions', 'time', 'location', 4, partitioning_func => 'location_hash');

Convert table conditions to hypertable. Do not raise a warning if conditions is already a hypertable:

  SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);

Time partition table measurements on a composite column type report using a time partitioning function. This requires an immutable function that can convert the column value into a supported column value:

  CREATE TYPE report AS (reported timestamp with time zone, contents jsonb);
  CREATE FUNCTION report_reported(report)
  RETURNS timestamptz
  LANGUAGE SQL
  IMMUTABLE AS
  'SELECT $1.reported';
  SELECT create_hypertable('measurements', 'report', time_partitioning_func => 'report_reported');

Time partition table events, on a column of type jsonb (event), which has a top-level key (started) containing an ISO 8601 formatted timestamp:

  CREATE FUNCTION event_started(jsonb)
  RETURNS timestamptz
  LANGUAGE SQL
  IMMUTABLE AS
  $func$SELECT ($1->>'started')::timestamptz$func$;
  SELECT create_hypertable('events', 'event', time_partitioning_func => 'event_started');

Best Practices

One of the most common questions users of TimescaleDB have revolves around configuring chunk_time_interval.

Time intervals: The current release of TimescaleDB enables both manual and automated adjustment of its time intervals. With manually-set intervals, users should specify a chunk_time_interval when creating their hypertable (the default value is 1 week). The interval used for new chunks can be changed by calling set_chunk_time_interval().

The key property when choosing the time interval is that the chunk belonging to the most recent interval (or chunks, if using space partitions), including its indexes, fits into memory. As such, we typically recommend setting the interval so that these chunks comprise no more than 25% of main memory.

TIP: Make sure that you are planning for recent chunks from all active hypertables to fit into 25% of main memory, rather than 25% per hypertable.

To determine this, you need a rough idea of your data rate. If you are writing roughly 2GB of data per day and have 64GB of memory, setting the time interval to a week would be good. If you are writing 10GB per day on the same machine, setting the time interval to a day would be appropriate. This interval would also hold if data is loaded more in batches, e.g., you bulk load 70GB of data per week, with data corresponding to records from throughout the week.

While it’s generally safer to make chunks too small rather than too large, setting intervals too small can lead to many chunks, which corresponds to increased planning latency for some types of queries.

TIP: One caveat is that the total chunk size is actually dependent on both the underlying data size and any indexes, so some care should be taken if you make heavy use of expensive index types (e.g., some PostGIS geospatial indexes). During testing, you might check your total chunk sizes via the chunk_relation_size function.
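A quick way to perform that check, sketched against the conditions hypertable (see the chunk_relation_size entry in this reference for its full set of return columns):

```sql
-- Sketch: list per-chunk sizes (table plus indexes) for a hypertable,
-- to verify recent chunks fit comfortably in memory.
SELECT * FROM chunk_relation_size('conditions');
```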

Space partitions: In most cases, it is advised for users not to use space partitions. The rare cases in which space partitions may be useful are described in the add_dimension section.


CREATE INDEX (Transaction Per Chunk)

  CREATE INDEX ... WITH (timescaledb.transaction_per_chunk, ...);

This option extends CREATE INDEX with the ability to use a separate transaction for each chunk it creates an index on, instead of using a single transaction for the entire hypertable. This allows INSERTs and other operations to be performed concurrently during most of the duration of the CREATE INDEX command. While the index is being created on an individual chunk, it functions as if a regular CREATE INDEX were called on that chunk; however, other chunks are completely unblocked.

TIP: This version of CREATE INDEX can be used as an alternative to CREATE INDEX CONCURRENTLY, which is not currently supported on hypertables.

WARNING: If the operation fails partway through, indexes may not be created on all hypertable chunks. If this occurs, the index on the root table of the hypertable will be marked as invalid (this can be seen by running \d+ on the hypertable). The index will still work, and will be created on new chunks, but if you wish to ensure all chunks have a copy of the index, drop and recreate it.

Sample Usage

Anonymous index

  CREATE INDEX ON conditions(time, device_id) WITH (timescaledb.transaction_per_chunk);

Other index methods

  CREATE INDEX ON conditions(time, location) USING brin
  WITH (timescaledb.transaction_per_chunk);

detach_tablespace()

Detach a tablespace from one or more hypertables. This only means that new chunks will not be placed on the detached tablespace. This is useful, for instance, when a tablespace is running low on disk space and one would like to prevent new chunks from being created in the tablespace. The detached tablespace itself and any existing chunks with data on it will remain unchanged and will continue to work as before, including being available for queries. Note that newly inserted data rows may still be inserted into an existing chunk on the detached tablespace since existing data is not cleared from a detached tablespace. A detached tablespace can be reattached if desired to once again be considered for chunk placement.

Required Arguments

Name | Description
tablespace | Name of the tablespace to detach.

When giving only the tablespace name as argument, the given tablespace will be detached from all hypertables that the current role has the appropriate permissions for. Therefore, without proper permissions, the tablespace may still receive new chunks after this command is issued.

Optional Arguments

Name | Description
hypertable | Identifier of the hypertable to detach the tablespace from.
if_attached | Set to true to avoid throwing an error if the tablespace is not attached to the given table. A notice is issued instead. Defaults to false.

When specifying a specific hypertable, the tablespace will only be detached from the given hypertable and thus may remain attached to other hypertables.

Sample Usage

Detach the tablespace disk1 from the hypertable conditions:

  SELECT detach_tablespace('disk1', 'conditions');
  SELECT detach_tablespace('disk2', 'conditions', if_attached => true);

Detach the tablespace disk1 from all hypertables that the current user has permissions for:

  SELECT detach_tablespace('disk1');

detach_tablespaces()

Detach all tablespaces from a hypertable. After issuing this command on a hypertable, it will no longer have any tablespaces attached to it. New chunks will instead be placed in the database’s default tablespace.

Required Arguments

Name | Description
hypertable | Identifier of the hypertable to detach all tablespaces from.

Sample Usage

Detach all tablespaces from the hypertable conditions:

  SELECT detach_tablespaces('conditions');

drop_chunks()

Removes data chunks whose time range falls completely before (or after) a specified time, operating either across all hypertables or for a specific one. It returns a list of the chunks that were dropped, in the same style as the show_chunks function.

Chunks are defined by a certain start and end time. If older_than is specified, a chunk is dropped if its end time is older than the specified timestamp. Alternatively, if newer_than is specified, a chunk is dropped if its start time is newer than the specified timestamp. Note that, because chunks are removed if and only if their time range falls fully before (or after) the specified timestamp, the remaining data may still contain timestamps that are before (or after) the specified one.

Required Arguments

The function requires at least one of the following arguments. These arguments have the same semantics as the show_chunks function.

Name | Description
older_than | Specification of cut-off point where any full chunks older than this timestamp should be removed.
table_name | Hypertable or continuous aggregate from which to drop chunks.
newer_than | Specification of cut-off point where any full chunks newer than this timestamp should be removed.

Optional Arguments

Name | Description
schema_name | Schema name of the hypertable from which to drop chunks. Defaults to public.
cascade | Boolean on whether to CASCADE the drop on chunks, therefore removing dependent objects on chunks to be removed. Defaults to FALSE.
cascade_to_materializations | Set to TRUE to also remove chunk data from any associated continuous aggregates. Set to FALSE to only drop raw chunks (while keeping data in the continuous aggregates). Defaults to NULL, which errors if continuous aggregates exist.

TIP: The table_name argument was optional in previous versions, but this is now deprecated: table_name should always be given.

The older_than and newer_than parameters can be specified in two ways:

  • interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

  • timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable’s time column.

WARNING: When using just an interval type, the function assumes that you are removing things in the past. If you want to remove data in the future (i.e., erroneous entries), use a timestamp.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will drop all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will drop all full chunks between ‘2017-01-01’ and ‘2017-02-01’. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

TIP: By default, calling drop_chunks on a table that has a continuous aggregate will throw an error. This can be resolved by setting cascade_to_materializations to TRUE, which will cause the corresponding aggregated data to also be dropped.

Sample Usage

Drop all chunks older than 3 months ago from hypertable conditions:

  SELECT drop_chunks(INTERVAL '3 months', 'conditions');

Example output:

  drop_chunks
  ----------------------------------------
  _timescaledb_internal._hyper_3_5_chunk
  _timescaledb_internal._hyper_3_6_chunk
  _timescaledb_internal._hyper_3_7_chunk
  _timescaledb_internal._hyper_3_8_chunk
  _timescaledb_internal._hyper_3_9_chunk
  (5 rows)

Drop all chunks more than 3 months in the future from hypertable conditions. This is useful for correcting data ingested with incorrect clocks:

  SELECT drop_chunks(newer_than => now() + INTERVAL '3 months', table_name => 'conditions');

Drop all chunks from hypertable conditions before 2017:

  SELECT drop_chunks(DATE '2017-01-01', 'conditions');

Drop all chunks from hypertable conditions before 2017, where time column is given in milliseconds from the UNIX epoch:

  SELECT drop_chunks(1483228800000, 'conditions');

Drop all chunks from hypertable conditions older than 3 months, including dependent objects (e.g., views):

  SELECT drop_chunks(INTERVAL '3 months', 'conditions', cascade => TRUE);

Drop all chunks older than 3 months ago and newer than 4 months ago from hypertable conditions:

  SELECT drop_chunks(older_than => INTERVAL '3 months', newer_than => INTERVAL '4 months', table_name => 'conditions');

Drop all chunks older than 3 months, and delete this data from any continuous aggregates based on it:

  SELECT drop_chunks(INTERVAL '3 months', 'conditions', cascade_to_materializations => true);

set_chunk_time_interval()

Sets the chunk_time_interval on a hypertable. The new interval is used when new chunks are created but the time intervals on existing chunks are not affected.

Required Arguments

Name | Description
main_table | Identifier of the hypertable to update the interval for.
chunk_time_interval | Interval in event time that each new chunk covers. Must be > 0.

Optional Arguments

Name | Description
dimension_name | The name of the time dimension to set the interval for. Only needed when the hypertable has multiple time dimensions.

The valid types for the chunk_time_interval depend on the type of hypertable time column:

  • TIMESTAMP, TIMESTAMPTZ, DATE: The specified chunk_time_interval should be given either as an INTERVAL type (INTERVAL '1 day') or as an integer or bigint value (representing some number of microseconds).

  • INTEGER: The specified chunk_time_interval should be an integer (smallint, int, bigint) value and represent the underlying semantics of the hypertable’s time column, e.g., given in milliseconds if the time column is expressed in milliseconds (see create_hypertable instructions).

Sample Usage

For a TIMESTAMP column, set chunk_time_interval to 24 hours.

  SELECT set_chunk_time_interval('conditions', INTERVAL '24 hours');
  SELECT set_chunk_time_interval('conditions', 86400000000);

For a time column expressed as the number of milliseconds since the UNIX epoch, set chunk_time_interval to 24 hours.

  SELECT set_chunk_time_interval('conditions', 86400000);

set_number_partitions()

Sets the number of partitions (slices) of a space dimension on a hypertable. The new partitioning only affects new chunks.

Required Arguments

Name | Description
main_table | Identifier of the hypertable to update the number of partitions for.
number_partitions | The new number of partitions for the dimension. Must be greater than 0 and less than 32,768.

Optional Arguments

Name | Description
dimension_name | The name of the space dimension to set the number of partitions for.

The dimension_name needs to be explicitly specified only if the hypertable has more than one space dimension; in that case, omitting it will throw an error.

Sample Usage

For a table with a single space dimension:

  SELECT set_number_partitions('conditions', 2);

For a table with more than one space dimension:

  SELECT set_number_partitions('conditions', 2, 'device_id');

set_integer_now_func()

This function is only relevant for hypertables with integer (as opposed to TIMESTAMP/TIMESTAMPTZ/DATE) time values. For such hypertables, it sets a function that returns the now() value (current time) in the units of the time column. This is necessary for running some policies on integer-based tables. In particular, many policies only apply to chunks of a certain age and a function that returns the current time is necessary to determine the age of a chunk.

Required Arguments

Name | Description
main_table | (REGCLASS) Identifier of the hypertable to set the integer now function for.
integer_now_func | (REGPROC) A function that returns the current time value in the same units as the time column.

Optional Arguments

Name | Description
replace_if_exists | (BOOLEAN) Whether to override the function if one is already set. Defaults to false.

Sample Usage

To set the integer now function for a hypertable with a time column in UNIX time (number of seconds since the UNIX epoch, UTC):

  CREATE OR REPLACE FUNCTION unix_now() RETURNS BIGINT LANGUAGE SQL STABLE AS
  $$ SELECT extract(epoch FROM now())::BIGINT $$;
  SELECT set_integer_now_func('test_table_bigint', 'unix_now');

show_chunks()

Get list of chunks associated with hypertables.

Optional Arguments

The function accepts the following arguments. These arguments have the same semantics as the drop_chunks function.

Name | Description
hypertable | Hypertable name from which to select chunks. If not supplied, all chunks are shown.
older_than | Specification of cut-off point where any full chunks older than this timestamp should be shown.
newer_than | Specification of cut-off point where any full chunks newer than this timestamp should be shown.

The older_than and newer_than parameters can be specified in two ways:

  • interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

  • timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable’s time column.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will show all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will show all full chunks between ‘2017-01-01’ and ‘2017-02-01’. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

Sample Usage

Get list of all chunks. Returns 0 if there are no hypertables:

  SELECT show_chunks();

The expected output:

  show_chunks
  ---------------------------------------
  _timescaledb_internal._hyper_1_10_chunk
  _timescaledb_internal._hyper_1_11_chunk
  _timescaledb_internal._hyper_1_12_chunk
  _timescaledb_internal._hyper_1_13_chunk
  _timescaledb_internal._hyper_1_14_chunk
  _timescaledb_internal._hyper_1_15_chunk
  _timescaledb_internal._hyper_1_16_chunk
  _timescaledb_internal._hyper_1_17_chunk
  _timescaledb_internal._hyper_1_18_chunk

Get list of all chunks associated with a table:

  SELECT show_chunks('conditions');

Get all chunks older than 3 months:

  SELECT show_chunks(older_than => INTERVAL '3 months');

Get all chunks more than 3 months in the future. This is useful for showing data ingested with incorrect clocks:

  SELECT show_chunks(newer_than => now() + INTERVAL '3 months');

Get all chunks from hypertable conditions older than 3 months:

  SELECT show_chunks('conditions', older_than => INTERVAL '3 months');

Get all chunks from hypertable conditions before 2017:

  SELECT show_chunks('conditions', older_than => DATE '2017-01-01');

Get all chunks newer than 3 months:

  SELECT show_chunks(newer_than => INTERVAL '3 months');

Get all chunks older than 3 months and newer than 4 months:

  SELECT show_chunks(older_than => INTERVAL '3 months', newer_than => INTERVAL '4 months');

reorder_chunk() Community

Reorder a single chunk’s heap to follow the order of an index. This function acts similarly to the PostgreSQL CLUSTER command; however, it uses lower lock levels so that, unlike with the CLUSTER command, the chunk and hypertable can be read for most of the process. It does use a bit more disk space during the operation.

This command can be particularly useful when data is often queried in an order different from that in which it was originally inserted. For example, data is commonly inserted into a hypertable in loose time order (e.g., many devices concurrently sending their current state), but one might typically query the hypertable about a specific device. In such cases, reordering a chunk using an index on (device_id, time) can lead to significant performance improvement for these types of queries.

One can call this function directly on individual chunks of a hypertable, but using add_reorder_policy is often much more convenient.

Required Arguments

NameDescription
chunk(REGCLASS) Name of the chunk to reorder.

Optional Arguments

NameDescription
index(REGCLASS) The name of the index (on either the hypertable or chunk) to order by.
verbose(BOOLEAN) Setting to true will display messages about the progress of the reorder command. Defaults to false.

Returns

This function returns void.

Sample Usage

  1. SELECT reorder_chunk('_timescaledb_internal._hyper_1_10_chunk', 'conditions_device_id_time_idx');

runs a reorder on the _timescaledb_internal._hyper_1_10_chunk chunk using the conditions_device_id_time_idx index.


move_chunk() Enterprise

TimescaleDB allows users to move data (and indexes) to alternative tablespaces, making it possible to move data to more cost-effective storage as it ages. This function acts like a combination of the PostgreSQL CLUSTER command and the PostgreSQL ALTER TABLE…SET TABLESPACE command. However, it uses lower lock levels so that, unlike with these PostgreSQL commands, the chunk and hypertable remain readable for most of the process. It does use a bit more disk space during the operation.

Required Arguments

NameDescription
chunk(REGCLASS) Name of chunk to be moved.
destination_tablespace(Name) Target tablespace for chunk you are moving.
index_destination_tablespace(Name) Target tablespace for index associated with the chunk you are moving.

Optional Arguments

NameDescription
reorder_index(REGCLASS) The name of the index (on either the hypertable or chunk) to order by.
verbose(BOOLEAN) Setting to true will display messages about the progress of the move_chunk command. Defaults to false.

Sample Usage

  1. SELECT move_chunk(
  2. chunk => '_timescaledb_internal._hyper_1_4_chunk',
  3. destination_tablespace => 'tablespace_2',
  4. index_destination_tablespace => 'tablespace_3',
  5. reorder_index => 'conditions_device_id_time_idx',
  6. verbose => TRUE
  7. );

Compression Community

We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.

Setting up compression on TimescaleDB requires users to first configure the hypertable for compression and then set up a policy for when to compress chunks.

Advanced usage of compression allows users to compress chunks manually, instead of automatically as they age.
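Putting the two steps together — a minimal sketch, assuming a hypertable named metrics with a device_id column (both names hypothetical):

```sql
-- Step 1: enable compression on the hypertable, segmenting by device
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_orderby = 'time DESC',
  timescaledb.compress_segmentby = 'device_id'
);

-- Step 2: add a policy that compresses chunks once they are 7 days old
SELECT add_compress_chunks_policy('metrics', INTERVAL '7 days');
```

Both commands are documented in detail in the sections that follow.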

WARNING:Compression is not available when using TimescaleDB on PostgreSQL 9.6.

Restrictions

The current version does not support altering or inserting data into compressed chunks. The data can be queried without any modifications; however, if you need to backfill or update data in a compressed chunk, you will need to decompress the chunk(s) first.

Associated commands

ALTER TABLE (Compression) Community

The ALTER TABLE statement is used to turn on compression and set compression options.

The syntax is:

  1. ALTER TABLE <table_name> SET (timescaledb.compress, timescaledb.compress_orderby = '<column_name> [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
  2. timescaledb.compress_segmentby = '<column_name> [, ...]'
  3. );

Required Options

NameDescription
timescaledb.compressBoolean to enable compression

Other Options

NameDescription
timescaledb.compress_orderbyOrder used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable’s time column.
timescaledb.compress_segmentbyColumn list on which to key the compressed segments. An identifier representing the source of the data such as device_id or tags_id is usually a good candidate. The default is no segment by columns.

Parameters

NameDescription
table_nameName of the hypertable that will support compression
column_nameName of the column used to order by and/or segment by

Sample Usage

Configure a hypertable that ingests device data to use compression.

  1. ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');

add_compress_chunks_policy() Community

Allows you to set a policy by which the system will compress a chunk automatically in the background after it reaches a given age.

Required Arguments

NameDescription
table_name(REGCLASS) Name of the table that the policy will act on.
time_interval(INTERVAL or integer) The age after which the policy job will compress chunks.

The time_interval parameter should be specified differently depending on the type of the time column of the hypertable:

  • For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time interval should be an INTERVAL type
  • For hypertables with integer-based timestamps: the time interval should be an integer type (this requires the integer_now_func to be set).

Sample Usage

Add a policy to compress chunks older than 60 days on the ‘cpu’ hypertable.

  1. SELECT add_compress_chunks_policy('cpu', INTERVAL '60d');

Add a compress chunks policy to a hypertable with an integer-based time column:

  1. SELECT add_compress_chunks_policy('table_with_bigint_time', BIGINT '600000');

remove_compress_chunks_policy() Community

Removes the compression policy from a hypertable. To restart policy-based compression, you will need to re-add the policy.

Required Arguments

NameDescription
table_name(REGCLASS) Name of the hypertable the policy should be removed from.

Sample Usage

Remove the compression policy from the ‘cpu’ table:

  1. SELECT remove_compress_chunks_policy('cpu');

compress_chunk() Community

The compress_chunk function is used to compress a specific chunk. This is most often used instead of the add_compress_chunks_policy function, when a user wants more control over the scheduling of compression. For most users, we suggest using the policy framework instead.

TIP:You can get a list of chunks belonging to a hypertable using the show_chunks function.

Required Arguments

NameDescription
chunk_name(REGCLASS) Name of the chunk to be compressed.

Optional Arguments

NameDescription
if_not_compressed(BOOLEAN) Setting to true will skip chunks that are already compressed. Defaults to false.

Sample Usage

Compress a single chunk.

  1. SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');

decompress_chunk() Community

If you need to modify or add data to a chunk that has already been compressed, you will need to decompress the chunk first. This is especially useful for backfilling old data.

TIP:Prior to decompressing chunks for data backfill or updates, you should first stop any compression policy that is active on the hypertable you plan to perform this operation on. Once the update and/or backfill is complete, simply turn the policy back on and the system will recompress your chunks.
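As a sketch of that workflow (the hypertable name cpu and the chunk name are hypothetical):

```sql
-- 1. Stop the active compression policy so it does not recompress mid-backfill
SELECT remove_compress_chunks_policy('cpu');

-- 2. Decompress the chunk(s) covering the rows you need to change
SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');

-- 3. Backfill or update the data
INSERT INTO cpu VALUES ('2020-01-01 00:00:00+00', 'host001', 0.42);

-- 4. Re-add the policy; the system will recompress the chunk in the background
SELECT add_compress_chunks_policy('cpu', INTERVAL '60d');
```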

Required Arguments

NameDescription
chunk_name(REGCLASS) Name of the chunk to be decompressed.

Optional Arguments

NameDescription
if_compressed(BOOLEAN) Setting to true will skip chunks that are not compressed. Defaults to false.

Sample Usage

Decompress a single chunk

  1. SELECT decompress_chunk('_timescaledb_internal._hyper_2_2_chunk');

Continuous Aggregates Community

TimescaleDB allows users to automatically recompute aggregates at predefined intervals and materialize the results. This is suitable for frequently used queries. For a more detailed discussion of this capability, please see using TimescaleDB Continuous Aggregates.

CREATE VIEW (Continuous Aggregate) Community

The CREATE VIEW statement is used to create continuous aggregates.

The syntax is:

  1. CREATE VIEW <view_name> [ ( column_name [, ...] ) ]
  2. WITH ( timescaledb.continuous [, timescaledb.<option> = <value> ] )
  3. AS
  4. <select_query>

<select_query> is of the form:

  1. SELECT <grouping_exprs>, <aggregate_functions>
  2. FROM <hypertable>
  3. [WHERE ... ]
  4. GROUP BY time_bucket( <const_value>, <partition_col_of_hypertable> ),
  5. [ <optional grouping exprs> ]
  6. [HAVING ...]

Note that continuous aggregates have some limitations on the types of queries they can support, described in more detail below. For example, the FROM clause must provide only one hypertable, i.e., no joins, views or subqueries are supported. The GROUP BY clause must include a time bucket on the hypertable’s time column, and all aggregates must be parallelizable.

Parameters

NameDescription
<view_name>Name (optionally schema-qualified) of continuous aggregate view to be created.
<column_name>Optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
WITH clauseThis clause specifies options for the continuous aggregate view.
<select_query>A SELECT query that uses the specified syntax.

Required WITH clause options

Name
timescaledb.continuous
DescriptionTypeDefault
If timescaledb.continuous is not specified, then this is a regular PostgreSQL view.BOOLEAN

Optional WITH clause options

Name
timescaledb.refresh_lag
DescriptionTypeDefault
Refresh lag controls the amount by which the materialization will lag behind the current time. The continuous aggregate view lags behind by the bucket_width + refresh_lag value. refresh_lag can be set to positive or negative values.Same datatype as the bucket_width argument from the time_bucket expression.The default value is twice the bucket width (as specified by the time_bucket expression).
Name
timescaledb.refresh_interval
DescriptionTypeDefault
Refresh interval controls how often the background materializer is run. Note that if refresh_lag is set to -<bucket_width>, the continuous aggregate will run whenever new data is received, regardless of what the refresh_interval value is.INTERVALBy default, this is set to twice the bucket width (if the datatype of the bucket_width argument from the time_bucket expression is an INTERVAL), otherwise it is set to 12 hours.
Name
timescaledb.materialized_only
DescriptionTypeDefault
Return only materialized data when querying the continuous aggregate view.BOOLEANfalse
Name
timescaledb.max_interval_per_job
DescriptionTypeDefault
Max interval per job specifies the amount of data processed by the background materializer job when the continuous aggregate is updated.Same datatype as the bucket_width argument from the time_bucket expression.The default value is 20 * bucket width.
Name
timescaledb.create_group_indexes
DescriptionTypeDefault
Create indexes on the materialization table for the group by columns (specified by the GROUP BY clause of the SELECT query).BOOLEANIndexes are created by default for every group by expression + time_bucket expression pair.
Name
timescaledb.ignore_invalidation_older_than
DescriptionTypeDefault
Time interval after which invalidations are ignored.Same datatype as the bucket_width argument from the time_bucket expression.By default all invalidations are processed.

TIP:Say the continuous aggregate uses time_bucket(INTERVAL '2h', time_column) and we want to keep the view up to date with the data. We can do this by setting refresh_lag to -2h, e.g., ALTER VIEW contview SET (timescaledb.refresh_lag = '-2h'); Please refer to the caveats.

Restrictions

  • The SELECT query should be of the form specified in the syntax above, which is discussed in the following items.
  • Only a single hypertable can be specified in the FROM clause of the SELECT query. This means that including more hypertables, joins, tables, views, subqueries is not supported.
  • The hypertable used in the SELECT may not have row-level-security policies enabled.
  • The GROUP BY clause must include a time_bucket expression. The time_bucket expression must use the time dimension column of the hypertable.
  • time_bucket_gapfill is not allowed in continuous aggs, but may be run in a SELECT from the continuous aggregate view.
  • In general, aggregates which can be parallelized by PostgreSQL are allowed in the view definition, this includes most aggregates distributed with PostgreSQL. Aggregates with ORDER BY, DISTINCT and FILTER clauses are not permitted.
  • All functions and their arguments included in SELECT, GROUP BY and HAVING clauses must be immutable.
  • Queries with ORDER BY are disallowed.
  • The view is not allowed to be a security barrier view.
  • Window functions cannot be used in conjunction with continuous aggregates.

TIP:You can find the settings for continuous aggregates and statistics in timescaledb_information views.

Sample Usage

Create a continuous aggregate view.

  1. CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
  2. WITH ( timescaledb.continuous,
  3. timescaledb.refresh_lag = '5 hours',
  4. timescaledb.refresh_interval = '1h' )
  5. AS
  6. SELECT time_bucket('1day', timec), min(location), sum(temperature), sum(humidity)
  7. FROM conditions
  8. GROUP BY time_bucket('1day', timec)

Add additional continuous aggregates on top of the same raw hypertable.

  1. CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
  2. WITH ( timescaledb.continuous,
  3. timescaledb.refresh_lag = '5 hours',
  4. timescaledb.refresh_interval = '1h' )
  5. AS
  6. SELECT time_bucket('30day', timec), min(location), sum(temperature), sum(humidity)
  7. FROM conditions
  8. GROUP BY time_bucket('30day', timec);

TIP:In order to keep the continuous aggregate up to date with incoming data, the refresh lag can be set to -<bucket_width>. Please note that by doing so, you will incur higher write amplification and performance penalties.

  1. CREATE VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
  2. WITH (timescaledb.continuous,
  3. timescaledb.refresh_lag = '-1h',
  4. timescaledb.refresh_interval = '30m')
  5. AS
  6. SELECT time_bucket('1h', timec), min(location), sum(temperature), sum(humidity)
  7. FROM conditions
  8. GROUP BY time_bucket('1h', timec);

ALTER VIEW (Continuous Aggregate) Community

The ALTER VIEW statement can be used to modify some of the WITH clause options for the continuous aggregate view.

  1. ALTER VIEW <view_name> SET ( timescaledb.<option> = <value> [, ... ] )

Parameters

NameDescription
<view_name>Name (optionally schema-qualified) of the continuous aggregate view to be altered.

Sample Usage

Set the max interval processed by a materializer job (that updates the continuous aggregate) to 1 week.

  1. ALTER VIEW contagg_view SET (timescaledb.max_interval_per_job = '1 week');

Set the refresh lag to 1 hour, the refresh interval to 30 minutes and the max interval processed by a job to 1 week for the continuous aggregate.

  1. ALTER VIEW contagg_view SET (timescaledb.refresh_lag = '1h', timescaledb.max_interval_per_job = '1 week', timescaledb.refresh_interval = '30m');

TIP:Only WITH options can be modified using the ALTER statement. If you need to change any other parameters, drop the view and create a new one.


REFRESH MATERIALIZED VIEW (Continuous Aggregate) Community

The continuous aggregate view can be manually updated by using the REFRESH MATERIALIZED VIEW statement. A background materializer job will run immediately and update the continuous aggregate.

  1. REFRESH MATERIALIZED VIEW <view_name>

Parameters

NameDescription
<view_name>Name (optionally schema-qualified) of the continuous aggregate view to be refreshed.

Sample Usage

Update the continuous aggregate view immediately.

  1. REFRESH MATERIALIZED VIEW contagg_view;

TIP:Note that max_interval_per_job and refresh_lag parameter settings are used by the materialization job when the REFRESH is run. So the materialization (of the continuous aggregate) does not necessarily include all the updates to the hypertable.


DROP VIEW (Continuous Aggregate) Community

Continuous aggregate views can be dropped using the DROP VIEW statement.

This deletes the hypertable that stores the materialized data for the continuous aggregate; it does not affect the data in the underlying hypertable from which the continuous aggregate is derived (i.e., the raw data). The CASCADE parameter is required for this command.

  1. DROP VIEW <view_name> CASCADE;

Parameters

NameDescription
<view_name>Name (optionally schema-qualified) of the continuous aggregate view to be dropped.

Sample Usage

Drop existing continuous aggregate.

  1. DROP VIEW contagg_view CASCADE;

WARNING:CASCADE will drop those objects that depend on the continuous aggregate, such as views that are built on top of the continuous aggregate view.


Automation policies Community

TimescaleDB includes an automation framework for allowing background tasks to run inside the database, controllable by user-supplied policies. These tasks currently include capabilities around data retention and data reordering for improving query performance.

The following functions allow the administrator to create/remove/alter policies that schedule administrative actions to take place on a hypertable. The actions are meant to implement data retention or perform tasks that will improve query performance on older chunks. Each policy is assigned a scheduled job which will be run in the background to enforce it.

add_drop_chunks_policy() Community

Create a policy to drop chunks older than a given interval of a particular hypertable or continuous aggregate on a schedule in the background. (See drop_chunks). This implements a data retention policy and will remove data on a schedule. Only one drop-chunks policy may exist per hypertable.

Required Arguments

NameDescription
table_name(REGCLASS) Name of the hypertable or continuous aggregate to create the policy for.
older_than(INTERVAL) Chunks fully older than this interval when the policy is run will be dropped.

Optional Arguments

NameDescription
cascade(BOOLEAN) Set to true to drop objects dependent upon chunks being dropped. Defaults to false.
if_not_exists(BOOLEAN) Set to true to avoid throwing an error if the policy already exists. A notice is issued instead. Defaults to false.
cascade_to_materializations(BOOLEAN) Set to TRUE to also remove chunk data from any associated continuous aggregates. Set to FALSE to only drop raw chunks (while keeping data in the continuous aggregates). Defaults to NULL, which errors if continuous aggregates exist.

TIP:When dropping data from the raw hypertable while retaining data on a continuous aggregate, the older_than parameter to drop_chunks has to be longer than the timescaledb.ignore_invalidation_older_than parameter on the continuous aggregate. That is because we cannot process invalidations on data regions where the raw data has been dropped.
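For example (the view name conditions_summary is hypothetical), the following keeps invalidation processing within 1 month while raw-data retention runs at 3 months, satisfying the constraint above:

```sql
-- Ignore invalidations older than 1 month on the continuous aggregate
ALTER VIEW conditions_summary SET (timescaledb.ignore_invalidation_older_than = '1 month');

-- Drop raw chunks older than 3 months while keeping data in the aggregate
SELECT add_drop_chunks_policy('conditions', INTERVAL '3 months',
       cascade_to_materializations => FALSE);
```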

WARNING:If a drop chunks policy is set up without cascade_to_materializations set to either TRUE or FALSE on a hypertable that has a continuous aggregate, the policy will not drop any chunks.

Returns

ColumnDescription
job_id(INTEGER) TimescaleDB background job id created to implement this policy

Sample Usage

  1. SELECT add_drop_chunks_policy('conditions', INTERVAL '6 months');

creates a data retention policy to discard chunks older than 6 months.


remove_drop_chunks_policy() Community

Remove a policy to drop chunks of a particular hypertable.

Required Arguments

NameDescription
table_name(REGCLASS) Name of the hypertable or continuous aggregate from which to remove the policy.

Optional Arguments

NameDescription
if_exists(BOOLEAN) Set to true to avoid throwing an error if the policy does not exist. Defaults to false.

Sample Usage

  1. SELECT remove_drop_chunks_policy('conditions');

removes the existing data retention policy for the conditions table.


add_reorder_policy() Community

Create a policy to reorder chunks on a given hypertable index in the background. (See reorder_chunk). Only one reorder policy may exist per hypertable. Only chunks that are at least the 3rd most recent will be reordered, to avoid reordering chunks that are still being inserted into.

TIP:Once a chunk has been reordered by the background worker it will not be reordered again. So if one were to insert significant amounts of data into older chunks that have already been reordered, it might be necessary to manually re-run the reorder_chunk function on older chunks, or to drop and re-create the policy if many older chunks have been affected.

Required Arguments

NameDescription
hypertable(REGCLASS) Name of the hypertable to create the policy for.
index_name(NAME) Existing index by which to order rows on disk.

Optional Arguments

NameDescription
if_not_exists(BOOLEAN) Set to true to avoid throwing an error if the reorder_policy already exists. A notice is issued instead. Defaults to false.

Returns

ColumnDescription
job_id(INTEGER) TimescaleDB background job id created to implement this policy

Sample Usage

  1. SELECT add_reorder_policy('conditions', 'conditions_device_id_time_idx');

creates a policy to reorder completed chunks by the existing (device_id, time) index. (See reorder_chunk).


remove_reorder_policy() Community

Remove a policy to reorder a particular hypertable.

Required Arguments

NameDescription
hypertable(REGCLASS) Name of the hypertable from which to remove the policy.

Optional Arguments

NameDescription
if_exists(BOOLEAN) Set to true to avoid throwing an error if the reorder_policy does not exist. A notice is issued instead. Defaults to false.

Sample Usage

  1. SELECT remove_reorder_policy('conditions', if_exists => true);

removes the existing reorder policy for the conditions table if it exists.


alter_job_schedule() Community

Policy jobs are scheduled to run periodically via a job run in a background worker. You can change the schedule using alter_job_schedule. To alter an existing job, you must refer to it by job_id. The job_id which implements a given policy and its current schedule can be found in views in the timescaledb_information schema corresponding to different types of policies or in the general timescaledb_information.policy_stats view. This view additionally contains information about when each job was last run and other useful statistics for deciding what the new schedule should be.

TIP:Altering the schedule will only change the frequency at which the background worker checks the policy. If you need to change the data retention interval or reorder by a different index, you’ll need to remove the policy and add a new one.

Required Arguments

NameDescription
job_id(INTEGER) the id of the policy job being modified

Optional Arguments

NameDescription
schedule_interval(INTERVAL) The interval at which the job runs
max_runtime(INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries(INTEGER) The number of times the job will be retried should it fail
retry_period(INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
if_exists(BOOLEAN) Set to true to avoid throwing an error if the job does not exist; a notice is issued instead. Defaults to false.
next_start(TIMESTAMPTZ) The next time at which to run the job. The job can be paused by setting this value to ‘infinity’ (and restarted with a value of now()).

Returns

ColumnDescription
schedule_interval(INTERVAL) The interval at which the job runs
max_runtime(INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries(INTEGER) The number of times the job will be retried should it fail
retry_period(INTERVAL) The amount of time the scheduler will wait between retries of the job on failure

Sample Usage

  1. SELECT alter_job_schedule(job_id, schedule_interval => INTERVAL '2 days')
  2. FROM timescaledb_information.reorder_policies
  3. WHERE hypertable = 'conditions'::regclass;

reschedules the reorder policy job for the conditions table so that it runs every two days.

  1. SELECT alter_job_schedule(job_id, schedule_interval => INTERVAL '5 minutes')
  2. FROM timescaledb_information.continuous_aggregate_stats
  3. WHERE view_name = 'conditions_agg'::regclass;

reschedules the continuous aggregate job for the conditions_agg view so that it runs every five minutes.

  1. SELECT alter_job_schedule(1015, next_start => '2020-03-15 09:00:00.0+00');

reschedules continuous aggregate job 1015 so that the next execution of the job starts at the specified time (9:00:00 am on March 15, 2020). This same query could have simultaneously changed the schedule_interval or queried the timescaledb_information.continuous_aggregate_stats informational view to extract the job_id, as shown above.
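Similarly, a job can be paused and later resumed via next_start, per the table above (job id 1015 is hypothetical, as before):

```sql
-- Pause the job indefinitely
SELECT alter_job_schedule(1015, next_start => 'infinity');

-- Resume it, running immediately
SELECT alter_job_schedule(1015, next_start => now());
```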


Analytics

first()

The first aggregate allows you to get the value of one column as ordered by another. For example, first(temperature, time) will return the earliest temperature value based on time within an aggregate group.

Required Arguments

NameDescription
valueThe value to return (anyelement)
timeThe timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)

Sample Usage

Get the earliest temperature by device_id:

  1. SELECT device_id, first(temp, time)
  2. FROM metrics
  3. GROUP BY device_id;

WARNING:The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).


histogram()

The histogram() function represents the distribution of a set of values as an array of equal-width buckets. It partitions the dataset into a specified number of buckets (nbuckets) ranging between the given min and max values.

The return value is an array containing nbuckets+2 buckets, with the middle nbuckets bins for values in the stated range, the first bucket at the head of the array for values under the lower min bound, and the last bucket for values greater than or equal to the max bound. Each bucket is inclusive on its lower bound, and exclusive on its upper bound. Therefore, values equal to the min are included in the bucket starting with min, but values equal to the max are in the last bucket.
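As a small illustration of these boundary rules (a sketch, not from the original docs): with min 0, max 10, and 2 buckets, the result has 4 entries — underflow, [0,5), [5,10), and overflow:

```sql
SELECT histogram(val, 0, 10, 2)
FROM (VALUES (-1), (0), (4), (5), (10)) AS v(val);
-- per the semantics above, this should yield {1,2,1,1}:
-- one underflow (-1), two in [0,5), one in [5,10), one overflow (10)
```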

Required Arguments

NameDescription
valueA set of values to partition into a histogram
minThe histogram’s lower bound used in bucketing (inclusive)
maxThe histogram’s upper bound used in bucketing (exclusive)
nbucketsThe integer value for the number of histogram buckets (partitions)

Sample Usage

A simple bucketing of device’s battery levels from the readings dataset:

  1. SELECT device_id, histogram(battery_level, 20, 60, 5)
  2. FROM readings
  3. GROUP BY device_id
  4. LIMIT 10;

The expected output:

  1. device_id | histogram
  2. ------------+------------------------------
  3. demo000000 | {0,0,0,7,215,206,572}
  4. demo000001 | {0,12,173,112,99,145,459}
  5. demo000002 | {0,0,187,167,68,229,349}
  6. demo000003 | {197,209,127,221,106,112,28}
  7. demo000004 | {0,0,0,0,0,39,961}
  8. demo000005 | {12,225,171,122,233,80,157}
  9. demo000006 | {0,78,176,170,8,40,528}
  10. demo000007 | {0,0,0,126,239,245,390}
  11. demo000008 | {0,0,311,345,116,228,0}
  12. demo000009 | {295,92,105,50,8,8,442}

interpolate() Community

The interpolate function does linear interpolation for missing values. It can only be used in an aggregation query with time_bucket_gapfill. The interpolate function call cannot be nested inside other function calls.

Required Arguments

NameDescription
valueThe value to interpolate (int2/int4/int8/float4/float8)

Optional Arguments

NameDescription
prevThe lookup expression for values before the gapfill time range (record)
nextThe lookup expression for values after the gapfill time range (record)

Because the interpolation function relies on having values before and after each bucketed period to compute the interpolated value, it might not have enough data to calculate the interpolation for the first and last time bucket if those buckets do not otherwise contain valid values. For example, the interpolation would require looking before this first time bucket period, yet the query’s outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev and next expressions tell the function how to look for values outside of the range specified by the time predicate. These expressions will only be evaluated when no suitable value is returned by the outer query (i.e., the first and/or last bucket in the queried time range is empty). The returned record for prev and next needs to be a (time, value) tuple. The datatype of time needs to be the same as the time datatype in the time_bucket_gapfill call. The datatype of value needs to be the same as the value datatype of the interpolate call.

Sample Usage

Get the temperature every day for each device over the last week interpolating for missing readings:

  1. SELECT
  2. time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
  3. device_id,
  4. avg(temperature) AS value,
  5. interpolate(avg(temperature))
  6. FROM metrics
  7. WHERE time > now () - INTERVAL '1 week'
  8. GROUP BY day, device_id
  9. ORDER BY day;
  10. day | device_id | value | interpolate
  11. ------------------------+-----------+-------+-------------
  12. 2019-01-10 01:00:00+01 | 1 | |
  13. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  14. 2019-01-12 01:00:00+01 | 1 | | 6.0
  15. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  16. 2019-01-14 01:00:00+01 | 1 | | 7.5
  17. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  18. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0
  19. (7 rows)

Get the average temperature every day for each device over the last 7 days interpolating for missing readings with lookup queries for values before and after the gapfill time range:

  1. SELECT
  2. time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
  3. device_id,
  4. avg(value) AS value,
  5. interpolate(avg(temperature),
  6. (SELECT (time,temperature) FROM metrics m2 WHERE m2.time < now() - INTERVAL '1 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1),
  7. (SELECT (time,temperature) FROM metrics m2 WHERE m2.time > now() AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1)
  8. ) AS interpolate
  9. FROM metrics m
  10. WHERE time > now () - INTERVAL '1 week'
  11. GROUP BY day, device_id
  12. ORDER BY day;
  13. day | device_id | value | interpolate
  14. ------------------------+-----------+-------+-------------
  15. 2019-01-10 01:00:00+01 | 1 | | 3.0
  16. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  17. 2019-01-12 01:00:00+01 | 1 | | 6.0
  18. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  19. 2019-01-14 01:00:00+01 | 1 | | 7.5
  20. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  21. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0
  22. (7 rows)

last()

The last aggregate allows you to get the value of one column as ordered by another. For example, last(temperature, time) will return the latest temperature value based on time within an aggregate group.

Required Arguments

NameDescription
valueThe value to return (anyelement)
timeThe timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)

Sample Usage

Get the temperature every 5 minutes for each device over the past day:

  1. SELECT device_id, time_bucket('5 minutes', time) AS interval,
  2. last(temp, time)
  3. FROM metrics
  4. WHERE time > now () - INTERVAL '1 day'
  5. GROUP BY device_id, interval
  6. ORDER BY interval DESC;

WARNING:The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).


locf() Community

The locf function (last observation carried forward) allows you to carry the last seen value in an aggregation group forward. It can only be used in an aggregation query with time_bucket_gapfill. The locf function call cannot be nested inside other function calls.

Required Arguments

NameDescription
valueThe value to carry forward (anyelement)

Optional Arguments

NameDescription
prevThe lookup expression for values before gapfill start (anyelement)
treat_null_as_missingIgnore NULL values in locf and only carry non-NULL values forward

Because the locf function relies on having values before each bucketed period to carry forward, it might not have enough data to fill in a value for the first bucket if that bucket does not contain a value. For example, the function would need to look before this first time bucket period, yet the query’s outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev expression tells the function how to look for values outside of the range specified by the time predicate. The prev expression is only evaluated when no previous value is returned by the outer query (i.e., when the first bucket in the queried time range is empty).
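The carry-forward semantics, including the prev fallback for a leading gap, can be sketched in a few lines of Python (illustrative only, not the TimescaleDB implementation):

```python
def locf(values, prev=None):
    """Carry the last non-None value forward across a list of bucketed
    values; `prev` seeds a leading gap, mirroring locf()'s optional
    lookup expression."""
    out, last = [], prev
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

print(locf([None, 5.0, None, 7.0]))            # [None, 5.0, 5.0, 7.0]
print(locf([None, 5.0, None, 7.0], prev=1.0))  # [1.0, 5.0, 5.0, 7.0]
```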

Sample Usage

Get the average temperature every day for each device over the last 7 days carrying forward the last value for missing readings:

  1. SELECT
  2. time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
  3. device_id,
  4. avg(temperature) AS value,
  5. locf(avg(temperature))
  6. FROM metrics
  7. WHERE time > now() - INTERVAL '1 week'
  8. GROUP BY day, device_id
  9. ORDER BY day;
  10. day | device_id | value | locf
  11. ------------------------+-----------+-------+------
  12. 2019-01-10 01:00:00+01 | 1 | |
  13. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  14. 2019-01-12 01:00:00+01 | 1 | | 5.0
  15. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  16. 2019-01-14 01:00:00+01 | 1 | | 7.0
  17. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  18. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0
  19. (7 rows)

Get the average temperature every day for each device over the last 7 days, carrying forward the last value for missing readings, with an out-of-bounds lookup for the first bucket:

  1. SELECT
  2. time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
  3. device_id,
  4. avg(temperature) AS value,
  5. locf(
  6. avg(temperature),
  7. (SELECT temperature FROM metrics m2 WHERE m2.time < now() - INTERVAL '1 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1)
  8. )
  9. FROM metrics m
  10. WHERE time > now() - INTERVAL '1 week'
  11. GROUP BY day, device_id
  12. ORDER BY day;
  13. day | device_id | value | locf
  14. ------------------------+-----------+-------+------
  15. 2019-01-10 01:00:00+01 | 1 | | 1.0
  16. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  17. 2019-01-12 01:00:00+01 | 1 | | 5.0
  18. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  19. 2019-01-14 01:00:00+01 | 1 | | 7.0
  20. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  21. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0
  22. (7 rows)

time_bucket()

This is a more powerful version of the standard PostgreSQL date_trunc function. It allows for arbitrary time intervals instead of the second, minute, hour, etc. provided by date_trunc. The return value is the bucket’s start time. Below is necessary information for using it effectively.

TIP:TIMESTAMPTZ arguments are bucketed by the time at UTC. So the alignment of buckets is on UTC time. One consequence of this is that daily buckets are aligned to midnight UTC, not local time.

If the user wants buckets aligned by local time, the TIMESTAMPTZ input should be cast to TIMESTAMP (such a cast converts the value to local time) before being passed to time_bucket (see example below). Note that along daylight saving time boundaries the amount of data aggregated into a bucket after such a cast is irregular: for example, if the bucket_width is 2 hours, the number of UTC hours bucketed by local time on daylight saving time boundaries can be either 3 hours or 1 hour.
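The UTC alignment described above comes down to simple epoch arithmetic: a bucket's start is the timestamp floored to the nearest multiple of bucket_width, counted from a fixed origin. This Python sketch is illustrative only (not the actual implementation); the default origin of 2000-01-03, a Monday, follows the behavior documented in the warning below:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_ORIGIN = datetime(2000, 1, 3, tzinfo=timezone.utc)  # a Monday

def time_bucket(width: timedelta, ts: datetime,
                origin: datetime = DEFAULT_ORIGIN) -> datetime:
    """Floor ts to the start of its bucket: buckets are consecutive
    multiples of `width` counted from `origin`, hence UTC-aligned."""
    return ts - (ts - origin) % width

# Daily buckets align to midnight UTC, regardless of the session timezone:
ts = datetime(2019, 1, 10, 22, 30, tzinfo=timezone.utc)
print(time_bucket(timedelta(days=1), ts))  # 2019-01-10 00:00:00+00:00
```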

Required Arguments

NameDescription
bucket_widthA PostgreSQL time interval for how long each bucket is (interval)
timeThe timestamp to bucket (timestamp/timestamptz/date)

Optional Arguments

NameDescription
offsetThe time interval to offset all buckets by (interval)
originBuckets are aligned relative to this timestamp (timestamp/timestamptz/date)

For Integer Time Inputs

Required Arguments

NameDescription
bucket_widthThe bucket width (integer)
timeThe timestamp to bucket (integer)

Optional Arguments

NameDescription
offsetThe amount to offset all buckets by (integer)

Sample Usage

Simple 5-minute averaging:

  1. SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
  2. FROM metrics
  3. GROUP BY five_min
  4. ORDER BY five_min DESC LIMIT 10;

To report the middle of the bucket, instead of the left edge:

  1. SELECT time_bucket('5 minutes', time) + '2.5 minutes'
  2. AS five_min, avg(cpu)
  3. FROM metrics
  4. GROUP BY five_min
  5. ORDER BY five_min DESC LIMIT 10;

For rounding, move the alignment so that the middle of the bucket is at the 5 minute mark (and report the middle of the bucket):

  1. SELECT time_bucket('5 minutes', time, '-2.5 minutes') + '2.5 minutes'
  2. AS five_min, avg(cpu)
  3. FROM metrics
  4. GROUP BY five_min
  5. ORDER BY five_min DESC LIMIT 10;

To shift the alignment of the buckets you can use the origin parameter (passed as a timestamp, timestamptz, or date type). In this example, we shift the start of the week to a Sunday (the default is a Monday).

  1. SELECT time_bucket('1 week', time, TIMESTAMPTZ '2017-12-31')
  2. AS one_week, avg(cpu)
  3. FROM metrics
  4. WHERE time > TIMESTAMPTZ '2017-12-01' AND time < TIMESTAMPTZ '2018-01-03'
  5. GROUP BY one_week
  6. ORDER BY one_week DESC LIMIT 10;

The value of the origin parameter we used in this example was 2017-12-31, a Sunday within the period being analyzed. However, the origin provided to the function can be before, during, or after the data being analyzed. All buckets are calculated relative to this origin. So, in this example, any Sunday could have been used. Note that because time < TIMESTAMPTZ '2018-01-03' in this example, the last bucket would have only 3 days of data.

Bucketing a TIMESTAMPTZ at local time instead of UTC (see the note above):

  1. SELECT time_bucket(INTERVAL '2 hours', time::TIMESTAMP)
  2. AS two_hours, avg(cpu)
  3. FROM metrics
  4. GROUP BY two_hours
  5. ORDER BY two_hours DESC LIMIT 10;

Note that the above cast to TIMESTAMP converts the time to local time according to the server’s timezone setting.

WARNING:For users upgrading from a version before 1.0.0, please note that the default origin was moved from 2000-01-01 (Saturday) to 2000-01-03 (Monday) between versions 0.12.1 and 1.0.0. This change was made to make time_bucket compliant with the ISO standard for Monday as the start of a week. This should only affect multi-day calls to time_bucket. The old behavior can be reproduced by passing 2000-01-01 as the origin parameter to time_bucket.
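The effect of the origin change on multi-day buckets can be illustrated with a small Python sketch (illustrative only, not the TimescaleDB implementation):

```python
from datetime import date, timedelta

def week_bucket(d: date, origin: date) -> date:
    """Floor d to the start of its 7-day bucket, aligned to origin."""
    return d - timedelta(days=(d - origin).days % 7)

wednesday = date(2017, 12, 6)
# Pre-1.0 default origin (2000-01-01, a Saturday):
print(week_bucket(wednesday, date(2000, 1, 1)))  # 2017-12-02, a Saturday
# Current ISO-compliant default origin (2000-01-03, a Monday):
print(week_bucket(wednesday, date(2000, 1, 3)))  # 2017-12-04, a Monday
```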


time_bucket_gapfill() Community

The time_bucket_gapfill function works similarly to time_bucket but also activates gap filling for the interval between start and finish. It can only be used in an aggregation query. Values outside of start and finish will pass through, but no gap filling is done outside of the specified range.

Starting with version 1.3.0, start and finish are optional arguments and will be inferred from the WHERE clause if not supplied as arguments.

TIP:We recommend using a WHERE clause whenever possible (instead of relying only on the start and finish arguments), because the start and finish arguments do not filter input rows. Without a WHERE clause, TimescaleDB’s planner selects all data and does not perform constraint exclusion to exclude chunks from further processing, which is less performant.

The time_bucket_gapfill call must be a top-level expression in a query or subquery, as shown in the examples below. You cannot, for example, do something like round(time_bucket_gapfill(...)) or cast the result of the gapfill call (unless done in a subquery, where the outer query does the type cast).

Required Arguments

NameDescription
bucket_widthA PostgreSQL time interval for how long each bucket is (interval)
timeThe timestamp to bucket (timestamp/timestamptz/date)

Optional Arguments

NameDescription
startThe start of the gapfill period (timestamp/timestamptz/date)
finishThe end of the gapfill period (timestamp/timestamptz/date)

Note that the start and finish values, whether provided explicitly or derived from the WHERE clause, must be simple expressions that can be evaluated to constants at query planning time. For example, they can contain constants or calls to now(), but cannot reference the columns of a table.

For Integer Time Inputs

Required Arguments

NameDescription
bucket_widthinteger interval for how long each bucket is (int2/int4/int8)
timeThe timestamp to bucket (int2/int4/int8)

Optional Arguments

NameDescription
startThe start of the gapfill period (int2/int4/int8)
finishThe end of the gapfill period (int2/int4/int8)

Starting with version 1.3.0, start and finish are optional arguments and will be inferred from the WHERE clause if not supplied as arguments.

Sample Usage

Get the metric value every day over the last 7 days:

  1. SELECT
  2. time_bucket_gapfill('1 day', time) AS day,
  3. device_id,
  4. avg(value) AS value
  5. FROM metrics
  6. WHERE time > now() - INTERVAL '1 week' AND time < now()
  7. GROUP BY day, device_id
  8. ORDER BY day;
  9. day | device_id | value
  10. ------------------------+-----------+-------
  11. 2019-01-10 01:00:00+01 | 1 |
  12. 2019-01-11 01:00:00+01 | 1 | 5.0
  13. 2019-01-12 01:00:00+01 | 1 |
  14. 2019-01-13 01:00:00+01 | 1 | 7.0
  15. 2019-01-14 01:00:00+01 | 1 |
  16. 2019-01-15 01:00:00+01 | 1 | 8.0
  17. 2019-01-16 01:00:00+01 | 1 | 9.0
  18. (7 rows)

Get the metric value every day over the last 7 days, carrying forward the previously seen value if none is available in an interval:

  1. SELECT
  2. time_bucket_gapfill('1 day', time) AS day,
  3. device_id,
  4. avg(value) AS value,
  5. locf(avg(value))
  6. FROM metrics
  7. WHERE time > now() - INTERVAL '1 week' AND time < now()
  8. GROUP BY day, device_id
  9. ORDER BY day;
  10. day | device_id | value | locf
  11. ------------------------+-----------+-------+------
  12. 2019-01-10 01:00:00+01 | 1 | |
  13. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  14. 2019-01-12 01:00:00+01 | 1 | | 5.0
  15. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  16. 2019-01-14 01:00:00+01 | 1 | | 7.0
  17. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  18. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0

Get the metric value every day over the last 7 days interpolating missing values:

  1. SELECT
  2. time_bucket_gapfill('1 day', time) AS day,
  3. device_id,
  4. avg(value) AS value,
  5. interpolate(avg(value))
  6. FROM metrics
  7. WHERE time > now() - INTERVAL '1 week' AND time < now()
  8. GROUP BY day, device_id
  9. ORDER BY day;
  10. day | device_id | value | interpolate
  11. ------------------------+-----------+-------+-------------
  12. 2019-01-10 01:00:00+01 | 1 | |
  13. 2019-01-11 01:00:00+01 | 1 | 5.0 | 5.0
  14. 2019-01-12 01:00:00+01 | 1 | | 6.0
  15. 2019-01-13 01:00:00+01 | 1 | 7.0 | 7.0
  16. 2019-01-14 01:00:00+01 | 1 | | 7.5
  17. 2019-01-15 01:00:00+01 | 1 | 8.0 | 8.0
  18. 2019-01-16 01:00:00+01 | 1 | 9.0 | 9.0

Utilities/Statistics

timescaledb_information.hypertable

Get information about hypertables.

Available Columns

NameDescription
table_schemaSchema name of the hypertable.
table_nameTable name of the hypertable.
table_ownerOwner of the hypertable.
num_dimensionsNumber of dimensions.
num_chunksNumber of chunks.
table_sizeDisk space used by hypertable.
index_sizeDisk space used by indexes.
toast_sizeDisk space of toast tables.
total_sizeTotal disk space used by the specified table, including all indexes and TOAST data.

Sample Usage

Get information about all hypertables.

  1. SELECT * FROM timescaledb_information.hypertable;
  2. table_schema | table_name | table_owner | num_dimensions | num_chunks | table_size | index_size | toast_size | total_size
  3. --------------+------------+-------------+----------------+------------+------------+------------+------------+------------
  4. public | metrics | postgres | 1 | 5 | 99 MB | 96 MB | | 195 MB
  5. public | devices | postgres | 1 | 1 | 8192 bytes | 16 kB | | 24 kB
  6. (2 rows)

Check whether a table is a hypertable.

  1. SELECT * FROM timescaledb_information.hypertable
  2. WHERE table_schema='public' AND table_name='metrics';
  3. table_schema | table_name | table_owner | num_dimensions | num_chunks | table_size | index_size | toast_size | total_size
  4. --------------+------------+-------------+----------------+------------+------------+------------+------------+------------
  5. public | metrics | postgres | 1 | 5 | 99 MB | 96 MB | | 195 MB
  6. (1 row)

If you want to see the current interval length for your hypertables, you can check the _timescaledb_catalog as follows. Note that for time-based interval lengths, these are reported in microseconds.

  1. SELECT h.table_name, c.interval_length FROM _timescaledb_catalog.dimension c
  2. JOIN _timescaledb_catalog.hypertable h ON h.id = c.hypertable_id;
  3. table_name | interval_length
  4. ------------+-----------------
  5. metrics | 604800000000
  6. (1 row)
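Since interval_length is reported in microseconds, the catalog value above can be sanity-checked with a quick conversion (an illustrative Python one-liner, not part of TimescaleDB):

```python
from datetime import timedelta

# interval_length from _timescaledb_catalog.dimension, in microseconds:
interval_length_us = 604_800_000_000
print(timedelta(microseconds=interval_length_us))  # 7 days, 0:00:00
```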

timescaledb_information.license

Get information about current license.

Available Columns

NameDescription
editionLicense key type (apache_only, community, enterprise)
expiredExpiration status of license key (bool)
expiration_timeTime of license key expiration

Sample Usage

Get information about current license.

  1. SELECT * FROM timescaledb_information.license;
  2. edition | expired | expiration_time
  3. ------------+---------+------------------------
  4. enterprise | f | 2019-02-15 13:44:53-05
  5. (1 row)

timescaledb_information.compressed_chunk_stats

Get statistics about chunk compression.

Available Columns

NameDescription
hypertable_name(REGCLASS) the name of the hypertable
chunk_name(REGCLASS) the name of the chunk
compression_status(TEXT) ‘Compressed’ or ‘Uncompressed’ depending on the status of the chunk
uncompressed_heap_bytes(TEXT) human-readable size of the heap before compression (NULL if currently uncompressed)
uncompressed_index_bytes(TEXT) human-readable size of all the indexes before compression (NULL if currently uncompressed)
uncompressed_toast_bytes(TEXT) human-readable size of the TOAST table before compression (NULL if currently uncompressed)
uncompressed_total_bytes(TEXT) human-readable size of the entire table (heap+indexes+toast) before compression (NULL if currently uncompressed)
compressed_heap_bytes(TEXT) human-readable size of the heap after compression (NULL if currently uncompressed)
compressed_index_bytes(TEXT) human-readable size of all the indexes after compression (NULL if currently uncompressed)
compressed_toast_bytes(TEXT) human-readable size of the TOAST table after compression (NULL if currently uncompressed)
compressed_total_bytes(TEXT) human-readable size of the entire table (heap+indexes+toast) after compression (NULL if currently uncompressed)

Sample Usage

  1. SELECT * FROM timescaledb_information.compressed_chunk_stats;
  2. -[ RECORD 1 ]------------+---------------------------------------
  3. hypertable_name | foo
  4. chunk_name | _timescaledb_internal._hyper_1_1_chunk
  5. compression_status | Uncompressed
  6. uncompressed_heap_bytes |
  7. uncompressed_index_bytes |
  8. uncompressed_toast_bytes |
  9. uncompressed_total_bytes |
  10. compressed_heap_bytes |
  11. compressed_index_bytes |
  12. compressed_toast_bytes |
  13. compressed_total_bytes |
  14. -[ RECORD 2 ]------------+---------------------------------------
  15. hypertable_name | foo
  16. chunk_name | _timescaledb_internal._hyper_1_2_chunk
  17. compression_status | Compressed
  18. uncompressed_heap_bytes | 8192 bytes
  19. uncompressed_index_bytes | 32 kB
  20. uncompressed_toast_bytes | 0 bytes
  21. uncompressed_total_bytes | 40 kB
  22. compressed_heap_bytes | 8192 bytes
  23. compressed_index_bytes | 32 kB
  24. compressed_toast_bytes | 8192 bytes
  25. compressed_total_bytes | 48 kB

timescaledb_information.compressed_hypertable_stats

Get statistics about hypertable compression.

Available Columns

NameDescription
hypertable_name(REGCLASS) the name of the hypertable
total_chunks(INTEGER) the number of chunks used by the hypertable
number_compressed_chunks(INTEGER) the number of chunks used by the hypertable that are currently compressed
uncompressed_heap_bytes(TEXT) human-readable size of the heap before compression (NULL if currently uncompressed)
uncompressed_index_bytes(TEXT) human-readable size of all the indexes before compression (NULL if currently uncompressed)
uncompressed_toast_bytes(TEXT) human-readable size of the TOAST table before compression (NULL if currently uncompressed)
uncompressed_total_bytes(TEXT) human-readable size of the entire table (heap+indexes+toast) before compression (NULL if currently uncompressed)
compressed_heap_bytes(TEXT) human-readable size of the heap after compression (NULL if currently uncompressed)
compressed_index_bytes(TEXT) human-readable size of all the indexes after compression (NULL if currently uncompressed)
compressed_toast_bytes(TEXT) human-readable size of the TOAST table after compression (NULL if currently uncompressed)
compressed_total_bytes(TEXT) human-readable size of the entire table (heap+indexes+toast) after compression (NULL if currently uncompressed)

Sample Usage

  1. SELECT * FROM timescaledb_information.compressed_hypertable_stats;
  2. -[ RECORD 1 ]------------+-----------
  3. hypertable_name | foo
  4. total_chunks | 4
  5. number_compressed_chunks | 1
  6. uncompressed_heap_bytes | 8192 bytes
  7. uncompressed_index_bytes | 32 kB
  8. uncompressed_toast_bytes | 0 bytes
  9. uncompressed_total_bytes | 40 kB
  10. compressed_heap_bytes | 8192 bytes
  11. compressed_index_bytes | 32 kB
  12. compressed_toast_bytes | 8192 bytes
  13. compressed_total_bytes | 48 kB

timescaledb_information.continuous_aggregates

Get metadata and settings information for continuous aggregates.

Available Columns

NameDescription
view_nameUser supplied name for continuous aggregate view
view_ownerOwner of the continuous aggregate view
refresh_lagAmount by which the materialization for the continuous aggregate lags behind the current time
refresh_intervalInterval between updates of the continuous aggregate materialization
max_interval_per_jobMaximum amount of data processed by a materialization job in a single run
ignore_invalidation_older_thanAge for which modified rows will not trigger update of the continuous aggregate
materialized_onlyReturn only materialized data when querying the continuous aggregate view.
materialization_hypertableName of the underlying materialization table
view_definitionSELECT query for continuous aggregate view

Sample Usage

  1. SELECT * FROM timescaledb_information.continuous_aggregates;
  2. -[ RECORD 1 ]------------------+-------------------------------------------------
  3. view_name | contagg_view
  4. view_owner | postgres
  5. refresh_lag | 02:00:00
  6. refresh_interval | 00:30:00
  7. max_interval_per_job | 20
  8. ignore_invalidation_older_than | 7 days
  9. materialized_only | f
  10. materialization_hypertable | _timescaledb_internal._materialized_hypertable_2
  11. view_definition | SELECT foo.a, +
  12. | COUNT(foo.b) AS countb +
  13. | FROM foo +
  14. | GROUP BY (time_bucket('1 day', foo.a)), foo.a;
  15. -- description of foo
  16. \d foo
  17. Table "public.foo"
  18. Column | Type | Collation | Nullable | Default
  19. --------+---------+-----------+----------+---------
  20. a | integer | | not null |
  21. b | integer | | |
  22. c | integer | | |

timescaledb_information.continuous_aggregate_stats

Get information about background jobs and statistics related to continuous aggregates.

Available Columns

NameDescription
view_nameUser supplied name for continuous aggregate.
completed_thresholdCompleted threshold for the last materialization job.
invalidation_thresholdInvalidation threshold set by the latest materialization job
last_run_started_atStart time of the last job
last_run_statusWhether the last run succeeded or failed
job_statusStatus of the materialization job. Valid values are ‘Running’ and ‘Scheduled’
last_run_durationTime taken by the last materialization job
next_scheduled_runStart time of the next materialization job
total_runsThe total number of runs of this job
total_successesThe total number of times this job succeeded
total_failuresThe total number of times this job failed
total_crashesThe total number of times this job crashed

Sample Usage

  1. SELECT * FROM timescaledb_information.continuous_aggregate_stats;
  2. -[ RECORD 1 ]----------+------------------------------
  3. view_name | contagg_view
  4. completed_threshold | 1
  5. invalidation_threshold | 1
  6. job_id | 1003
  7. last_run_started_at | 2019-07-03 15:00:26.016018-04
  8. last_run_status | Success
  9. job_status | scheduled
  10. last_run_duration | 00:00:00.039163
  11. next_scheduled_run | 2019-07-03 15:00:56.055181-04
  12. total_runs | 3
  13. total_successes | 3
  14. total_failures | 0
  15. total_crashes | 0

timescaledb_information.drop_chunks_policies

Shows information about drop_chunks policies that have been created by the user. (See add_drop_chunks_policy for more information about drop_chunks policies).

Available Columns

NameDescription
hypertable(REGCLASS) The name of the hypertable on which the policy is applied
older_than(INTERVAL) Chunks fully older than this amount of time will be dropped when the policy is run
cascade(BOOLEAN) Whether the policy will be run with the cascade option turned on, which will cause dependent objects to be dropped as well as chunks.
job_id(INTEGER) The id of the background job set up to implement the drop_chunks policy
schedule_interval(INTERVAL) The interval at which the job runs
max_runtime(INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries(INTEGER) The number of times the job will be retried should it fail
retry_period(INTERVAL) The amount of time the scheduler will wait between retries of the job on failure

Sample Usage

Get information about drop_chunks policies.

  1. SELECT * FROM timescaledb_information.drop_chunks_policies;
  2. hypertable | older_than | cascade | job_id | schedule_interval | max_runtime | max_retries | retry_period
  3. ------------------------+------------+---------+--------+-------------------+-------------+-------------+--------------
  4. conditions | @ 4 mons | t | 1001 | @ 1 sec | @ 5 mins | -1 | @ 12 hours
  5. (1 row)

timescaledb_information.reorder_policies

Shows information about reorder policies that have been created by the user. (See add_reorder_policy for more information about reorder policies).

Available Columns

NameDescription
hypertable(REGCLASS) The name of the hypertable on which the policy is applied
index(NAME) The name of the index used to reorder data in each chunk when the policy is run
job_id(INTEGER) The id of the background job set up to implement the reorder policy
schedule_interval(INTERVAL) The interval at which the job runs
max_runtime(INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries(INTEGER) The number of times the job will be retried should it fail
retry_period(INTERVAL) The amount of time the scheduler will wait between retries of the job on failure

Sample Usage

Get information about reorder policies.

  1. SELECT * FROM timescaledb_information.reorder_policies;
  2. hypertable | hypertable_index_name | job_id | schedule_interval | max_runtime | max_retries | retry_period
  3. --------------------+-----------------------------+--------+-------------------+-------------+-------------+--------------
  4. conditions | conditions_device_id_time_idx | 1000 | @ 4 days | @ 0 | -1 | @ 1 day
  5. (1 row)

timescaledb_information.policy_stats

Shows information and statistics about policies created to manage data retention and other administrative tasks on hypertables. (See policies). The statistics include information useful for administering jobs and determining whether they ought to be rescheduled, such as when and whether the background job used to implement the policy succeeded, and when it is scheduled to run next.

Available Columns

NameDescription
hypertable(REGCLASS) The name of the hypertable on which the policy is applied
job_id(INTEGER) The id of the background job created to implement the policy
job_type(TEXT) The type of policy the job was created to implement
last_run_success(BOOLEAN) Whether the last run succeeded or failed
last_finish(TIMESTAMPTZ) The time the last run finished
last_start(TIMESTAMPTZ) The time the last run started
next_start(TIMESTAMPTZ) The time the next run will start
total_runs(INTEGER) The total number of runs of this job
total_failures(INTEGER) The total number of times this job failed

Sample Usage

Get information about statistics on created policies.

  1. SELECT * FROM timescaledb_information.policy_stats;
  2. hypertable | job_id | job_type | last_run_success | last_finish | last_start | next_start | total_runs | total_failures
  3. ------------------------+--------+-------------+------------------+------------------------------+------------------------------+------------------------------+------------+----------------
  4. conditions | 1001 | drop_chunks | t | Fri Dec 31 16:00:01 1999 PST | Fri Dec 31 16:00:01 1999 PST | Fri Dec 31 16:00:02 1999 PST | 2 | 0
  5. (1 row)

timescaledb.license_key

Sample Usage

View current license key.

  1. SHOW timescaledb.license_key;

chunk_relation_size()

Get the relation size of the chunks of a hypertable.

Required Arguments

NameDescription
main_tableIdentifier of hypertable to get chunk relation sizes for.

Returns

ColumnDescription
chunk_idTimescaleDB id of a chunk
chunk_tableTable used for the chunk
partitioning_columnsPartitioning column names
partitioning_column_typesTypes of partitioning columns
partitioning_hash_functionsHash functions of partitioning columns
dimensionsPartitioning dimension names
rangesPartitioning ranges for each dimension
table_bytesDisk space used by main_table
index_bytesDisk space used by indexes
toast_bytesDisk space of toast tables
total_bytesDisk space used in total

Sample Usage

  1. SELECT * FROM chunk_relation_size('conditions');

or, to reduce the output, a common use is:

  1. SELECT chunk_table, table_bytes, index_bytes, total_bytes
  2. FROM chunk_relation_size('conditions');

The expected output:

  1. chunk_table | table_bytes | index_bytes | total_bytes
  2. ---------------------------------------------+-------------+-------------+-------------
  3. "_timescaledb_internal"."_hyper_1_1_chunk" | 29220864 | 37773312 | 67002368
  4. "_timescaledb_internal"."_hyper_1_2_chunk" | 59252736 | 81297408 | 140558336
  5. ...

Where chunk_table is the table that contains the data, table_bytes is the size of that table, index_bytes is the size of the indexes of the table, and total_bytes is the size of the table with indexes.


chunk_relation_size_pretty()

Get the relation size of the chunks of a hypertable.

Required Arguments

NameDescription
main_tableIdentifier of hypertable to get chunk relation sizes for.

Returns

ColumnDescription
chunk_idTimescaleDB id of a chunk
chunk_tableTable used for the chunk
partitioning_columnsPartitioning column names
partitioning_column_typesTypes of partitioning columns
partitioning_hash_functionsHash functions of partitioning columns
rangesPartitioning ranges for each dimension
table_sizePretty output of table_bytes
index_sizePretty output of index_bytes
toast_sizePretty output of toast_bytes
total_sizePretty output of total_bytes

Sample Usage

  1. SELECT * FROM chunk_relation_size_pretty('conditions');

or, to reduce the output, a common use is:

  1. SELECT chunk_table, table_size, index_size, total_size
  2. FROM chunk_relation_size_pretty('conditions');

The expected output:

  1. chunk_table | table_size | index_size | total_size
  2. ---------------------------------------------+------------+------------+------------
  3. "_timescaledb_internal"."_hyper_1_1_chunk" | 28 MB | 36 MB | 64 MB
  4. "_timescaledb_internal"."_hyper_1_2_chunk" | 57 MB | 78 MB | 134 MB
  5. ...

Where chunk_table is the table that contains the data, table_size is the size of that table, index_size is the size of the indexes of the table, and total_size is the size of the table with indexes.


get_telemetry_report()

If background telemetry is enabled, returns the string sent to our servers. If telemetry is not enabled, outputs an INFO message stating that telemetry is disabled and returns a NULL report.

Optional Arguments

NameDescription
always_display_reportSet to true to always view the report, even if telemetry is disabled

Sample Usage

If telemetry is enabled, view the telemetry report.

  1. SELECT get_telemetry_report();

If telemetry is disabled, view the telemetry report locally.

  1. SELECT get_telemetry_report(always_display_report := true);

hypertable_approximate_row_count()

Get approximate row count for hypertable(s) based on catalog estimates.

Optional Arguments

NameDescription
main_tableHypertable to get row count for. If omitted, all hypertables are returned.

Sample Usage

Get the approximate row count for a single hypertable.

  1. SELECT * FROM hypertable_approximate_row_count('conditions');

Get the approximate row count for all hypertables.

  1. SELECT * FROM hypertable_approximate_row_count();

The expected output:

  1. schema_name | table_name | row_estimate
  2. -------------+------------+--------------
  3. public | conditions | 240000

hypertable_relation_size()

Get the relation size of a hypertable, like pg_relation_size(hypertable).

Required Arguments

NameDescription
main_tableIdentifier of hypertable to get relation size for.

Returns

ColumnDescription
table_bytesDisk space used by main_table (like pg_relation_size(main_table))
index_bytesDisk space used by indexes
toast_bytesDisk space of toast tables
total_bytesTotal disk space used by the specified table, including all indexes and TOAST data

Sample Usage

  1. SELECT * FROM hypertable_relation_size('conditions');

or, to reduce the output, a common use is:

  1. SELECT table_bytes, index_bytes, toast_bytes, total_bytes
  2. FROM hypertable_relation_size('conditions');

The expected output:

  1. table_bytes | index_bytes | toast_bytes | total_bytes
  2. -------------+-------------+-------------+-------------
  3. 1227661312 | 1685979136 | 180224 | 2913820672

hypertable_relation_size_pretty()

Get the relation size of a hypertable, like pg_relation_size(hypertable).

Required Arguments

NameDescription
main_tableIdentifier of hypertable to get relation size for.

Returns

ColumnDescription
table_sizePretty output of table_bytes
index_sizePretty output of index_bytes
toast_sizePretty output of toast_bytes
total_sizePretty output of total_bytes

Sample Usage

  1. SELECT * FROM hypertable_relation_size_pretty('conditions');

or, to reduce the output, a common use is:

  1. SELECT table_size, index_size, toast_size, total_size
  2. FROM hypertable_relation_size_pretty('conditions');

The expected output:

  1. table_size | index_size | toast_size | total_size
  2. ------------+------------+------------+------------
  3. 1171 MB | 1608 MB | 176 kB | 2779 MB

indexes_relation_size()

Get sizes of indexes on a hypertable.

Required Arguments

NameDescription
main_tableIdentifier of hypertable to get indexes size for.

Returns

Column        Description
index_name    Index on hypertable
total_bytes   Size of index on disk

Sample Usage

SELECT * FROM indexes_relation_size('conditions');

The expected output:

 index_name                           | total_bytes
--------------------------------------+-------------
 public.conditions_device_id_time_idx |  1198620672
 public.conditions_time_idx           |   487358464
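Since total_bytes is a raw byte count, it can also be combined with standard PostgreSQL helpers. A sketch that pretty-prints each index size using the built-in pg_size_pretty function:

```sql
-- Sketch: human-readable index sizes, largest first,
-- using PostgreSQL's built-in pg_size_pretty().
SELECT index_name, pg_size_pretty(total_bytes) AS index_size
FROM indexes_relation_size('conditions')
ORDER BY total_bytes DESC;
```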

indexes_relation_size_pretty()

Get sizes of indexes on a hypertable.

Required Arguments

Name         Description
main_table   Identifier of hypertable to get indexes size for.

Returns

Column       Description
index_name   Index on hypertable
total_size   Pretty output of total_bytes

Sample Usage

SELECT * FROM indexes_relation_size_pretty('conditions');

The expected output:

 index_name                           | total_size
--------------------------------------+------------
 public.conditions_device_id_time_idx | 1143 MB
 public.conditions_time_idx           | 465 MB

show_tablespaces()

Show the tablespaces attached to a hypertable.

Required Arguments

Name         Description
hypertable   Identifier of hypertable to show attached tablespaces for.

Sample Usage

SELECT * FROM show_tablespaces('conditions');

 show_tablespaces
------------------
 disk1
 disk2
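Tablespaces only appear in this list after being attached to the hypertable. A sketch of the preceding setup, assuming tablespaces disk1 and disk2 have already been created with CREATE TABLESPACE, using TimescaleDB's attach_tablespace function:

```sql
-- Sketch: attach two pre-existing tablespaces to the hypertable,
-- then list them. 'disk1' and 'disk2' are assumed to already exist.
SELECT attach_tablespace('disk1', 'conditions');
SELECT attach_tablespace('disk2', 'conditions');
SELECT * FROM show_tablespaces('conditions');
```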

timescaledb_pre_restore()

Perform the proper operations to allow a restore of the database via pg_restore to commence. Specifically, this sets the timescaledb.restoring GUC to on and stops any background workers that may have been performing tasks, until the timescaledb_post_restore function is run following the restore. See the backup/restore docs for more information.

WARNING: After running SELECT timescaledb_pre_restore(), you must run the timescaledb_post_restore function before using the database normally.

Sample Usage

SELECT timescaledb_pre_restore();

timescaledb_post_restore()

Perform the proper operations after a database restore has completed. Specifically, this sets the timescaledb.restoring GUC to off and restarts any background workers. See the backup/restore docs for more information.

Sample Usage

SELECT timescaledb_post_restore();
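Taken together with pg_restore, a typical restore sequence might look like the following sketch (the database name tutorial and dump file tsdb.bak are placeholders for your own names):

```shell
# Sketch: full restore workflow around timescaledb_pre_restore/post_restore.
# 'tutorial' and 'tsdb.bak' are hypothetical names.
createdb tutorial
psql -d tutorial -c "CREATE EXTENSION timescaledb;"
psql -d tutorial -c "SELECT timescaledb_pre_restore();"
pg_restore -Fc -d tutorial tsdb.bak
psql -d tutorial -c "SELECT timescaledb_post_restore();"
```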

Dump TimescaleDB meta data

To help when asking for support and reporting bugs, TimescaleDB includes a SQL script that outputs metadata from the internal TimescaleDB tables as well as version information. The script is available in the source distribution in scripts/ but can also be downloaded separately. To use it, run:

psql [your connect flags] -d your_timescale_db < dump_meta_data.sql > dumpfile.txt

and then inspect dumpfile.txt before sending it together with a bug report or support question.