Perform advanced analytic queries

You can use TimescaleDB for a variety of analytical queries. Some of these queries use native PostgreSQL features, and others use additional functions provided by TimescaleDB. This section contains some of the most common and useful analytic queries.

Calculate the median and percentile

Use the percentile_cont function to calculate percentiles. You can also use it to find the fiftieth percentile, or median. For example, to find the median temperature:

  SELECT percentile_cont(0.5)
  WITHIN GROUP (ORDER BY temperature)
  FROM conditions;

You can also use the TimescaleDB Toolkit to calculate approximate percentiles.
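
For example, a minimal sketch, assuming the timescaledb_toolkit extension is installed:

  -- percentile_agg builds a sketch of the value distribution;
  -- approx_percentile(0.5, ...) reads the estimated median from it
  SELECT approx_percentile(0.5, percentile_agg(temperature)) AS median_estimate
  FROM conditions;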

Calculate the cumulative sum

Use sum(sum(column)) OVER(ORDER BY group) to find the cumulative sum. For example:

  SELECT location, sum(sum(temperature)) OVER (ORDER BY location)
  FROM conditions
  GROUP BY location;
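
The inner sum is the regular per-group aggregate, and the outer sum runs as a window function over the grouped rows. To get a running total over time within each group instead, add PARTITION BY to the window. A sketch, assuming the conditions table has a time column as in the other examples:

  SELECT time_bucket('1 day', time) AS day,
    location,
    -- each location gets its own running total, ordered by day
    sum(sum(temperature)) OVER (PARTITION BY location
      ORDER BY time_bucket('1 day', time)) AS cumulative_temp
  FROM conditions
  GROUP BY day, location;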

Calculate the moving average

For a simple moving average, use an OVER window clause that spans a number of rows, then compute an aggregate function over those rows. For example, to find the smoothed temperature of a device by averaging its ten most recent readings:

  SELECT time, AVG(temperature) OVER (ORDER BY time
      ROWS BETWEEN 9 PRECEDING AND CURRENT ROW)
    AS smooth_temp
  FROM conditions
  WHERE location = 'garage' AND time > NOW() - INTERVAL '1 day'
  ORDER BY time DESC;
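
If readings arrive at irregular intervals, a window of nine rows can cover very different spans of time. On PostgreSQL 11 and later you can size the window by time instead; a sketch of the same query with a one-hour window:

  SELECT time, AVG(temperature) OVER (ORDER BY time
      -- average everything within the hour preceding each row
      RANGE BETWEEN INTERVAL '1 hour' PRECEDING AND CURRENT ROW)
    AS smooth_temp
  FROM conditions
  WHERE location = 'garage' AND time > NOW() - INTERVAL '1 day'
  ORDER BY time DESC;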

Calculate the increase in a value

To calculate the increase in a value, you need to account for counter resets. Counter resets can occur if a host reboots or a container restarts. This example finds the number of bytes sent, taking counter resets into account:

  SELECT
    time,
    (
      CASE
        WHEN bytes_sent >= lag(bytes_sent) OVER w
          THEN bytes_sent - lag(bytes_sent) OVER w
        WHEN lag(bytes_sent) OVER w IS NULL THEN NULL
        ELSE bytes_sent
      END
    ) AS "bytes"
  FROM net
  WHERE interface = 'eth0' AND time > NOW() - INTERVAL '1 day'
  WINDOW w AS (ORDER BY time)
  ORDER BY time;

Calculate the rate of change

Like the increase calculation, rate applies to monotonically increasing counters. If your sampling interval is variable, or you use different sampling intervals between series, it is helpful to normalize the values to a common time interval so that the calculated values are comparable. This example finds the bytes per second sent, taking counter resets into account:

  SELECT
    time,
    (
      CASE
        WHEN bytes_sent >= lag(bytes_sent) OVER w
          THEN bytes_sent - lag(bytes_sent) OVER w
        WHEN lag(bytes_sent) OVER w IS NULL THEN NULL
        ELSE bytes_sent
      END
    ) / extract(epoch from time - lag(time) OVER w) AS "bytes_per_second"
  FROM net
  WHERE interface = 'eth0' AND time > NOW() - INTERVAL '1 day'
  WINDOW w AS (ORDER BY time)
  ORDER BY time;
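
If you have the TimescaleDB Toolkit extension installed, its counter aggregates can express both the increase and the rate more compactly, because counter_agg detects resets for you. A sketch under that assumption:

  SELECT
    -- total increase over the period, counter resets accounted for
    delta(counter_agg(time, bytes_sent)) AS total_bytes,
    -- average per-second rate over the period
    rate(counter_agg(time, bytes_sent)) AS bytes_per_second
  FROM net
  WHERE interface = 'eth0' AND time > NOW() - INTERVAL '1 day';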

Calculate the delta

In many monitoring and IoT use cases, devices or sensors report metrics that do not change frequently, and any changes are considered anomalies. When you query for these changes over time, you usually do not want to transmit all the values, but only the values where changes were observed. This helps to minimize the amount of data sent. You can use a combination of window functions and subselects to achieve this. This example calculates the difference between consecutive values, and uses it to filter out rows where the value has not changed:

  SELECT time, value FROM (
    SELECT time,
      value,
      value - LAG(value) OVER (ORDER BY time) AS diff
    FROM hypertable) ht
  WHERE diff IS NULL OR diff != 0;

Calculate the change in a metric within a group

To group your data by some field, and calculate the change in a metric within each group, use LAG ... OVER (PARTITION BY ...). For example, given some weather data, calculate the change in temperature for each city:

  SELECT ts, city_name, temp_delta
  FROM (
    SELECT
      ts,
      city_name,
      avg_temp - LAG(avg_temp) OVER (PARTITION BY city_name ORDER BY ts) AS temp_delta
    FROM weather_metrics_daily
  ) AS temp_change
  WHERE temp_delta IS NOT NULL
  ORDER BY ts;

Group data into time buckets

The TimescaleDB time_bucket function extends the PostgreSQL date_trunc function. time_bucket accepts arbitrary time intervals, as well as an optional offset, and returns the bucket start time. For example:

  SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
  FROM metrics
  GROUP BY five_min
  ORDER BY five_min DESC LIMIT 12;
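
To use the offset, pass an interval as a third argument. For example, to shift each five-minute bucket to start one minute past the usual boundary:

  SELECT time_bucket('5 minutes', time, INTERVAL '1 minute') AS five_min, avg(cpu)
  FROM metrics
  GROUP BY five_min
  ORDER BY five_min DESC LIMIT 12;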

Get the first or last value in a column

The TimescaleDB first and last functions allow you to get the value of one column as ordered by another. This is commonly used in an aggregation. These examples find the last element of a group:

  SELECT location, last(temperature, time)
  FROM conditions
  GROUP BY location;

  SELECT time_bucket('5 minutes', time) five_min, location, last(temperature, time)
  FROM conditions
  GROUP BY five_min, location
  ORDER BY five_min DESC LIMIT 12;
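
The first function works the same way, returning the earliest value instead:

  SELECT location, first(temperature, time)
  FROM conditions
  GROUP BY location;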

Generate a histogram

The TimescaleDB histogram function allows you to generate a histogram of your data. This example defines a histogram with five buckets over the range 60 to 85. The generated histogram has seven bins: the first is for values below the minimum threshold of 60, the middle five bins are for values in the stated range, and the last is for values above 85:

  SELECT location, COUNT(*),
    histogram(temperature, 60.0, 85.0, 5)
  FROM conditions
  WHERE time > NOW() - INTERVAL '7 days'
  GROUP BY location;

This query outputs data like this:

   location | count |           histogram
  ----------+-------+-------------------------------
   office   | 10080 | {0,0,3860,6220,0,0,0}
   basement | 10080 | {0,6056,4024,0,0,0,0}
   garage   | 10080 | {0,2679,957,2420,2150,1874,0}

Fill gaps in time-series data

You can display records for a selected time range, even if no data exists for part of the range. This is often called gap filling, and usually involves an operation to record a null value for any missing data.

In this example, we use trading data that includes a time timestamp, the asset_code being traded, the price of the asset, and the volume of the asset being traded.

Create a query for the volume of the asset 'TIMS' being traded every day for the month of September:

  SELECT
    time_bucket('1 day', time) AS date,
    sum(volume) AS volume
  FROM trades
  WHERE asset_code = 'TIMS'
    AND time >= '2021-09-01' AND time < '2021-10-01'
  GROUP BY date
  ORDER BY date DESC;

This query outputs data like this:

            date          | volume
  ------------------------+--------
   2021-09-29 00:00:00+00 |  11315
   2021-09-28 00:00:00+00 |   8216
   2021-09-27 00:00:00+00 |   5591
   2021-09-26 00:00:00+00 |   9182
   2021-09-25 00:00:00+00 |  14359
   2021-09-22 00:00:00+00 |   9855

You can see from the output that no records are included for 09-23, 09-24, or 09-30, because no trade data was recorded for those days. To include time records for each missing day, you can use the TimescaleDB time_bucket_gapfill function, which generates a series of time buckets according to a given interval across a time range. In this example, the interval is one day, across the month of September:

  SELECT
    time_bucket_gapfill('1 day', time) AS date,
    sum(volume) AS volume
  FROM trades
  WHERE asset_code = 'TIMS'
    AND time >= '2021-09-01' AND time < '2021-10-01'
  GROUP BY date
  ORDER BY date DESC;

This query outputs data like this:

            date          | volume
  ------------------------+--------
   2021-09-30 00:00:00+00 |
   2021-09-29 00:00:00+00 |  11315
   2021-09-28 00:00:00+00 |   8216
   2021-09-27 00:00:00+00 |   5591
   2021-09-26 00:00:00+00 |   9182
   2021-09-25 00:00:00+00 |  14359
   2021-09-24 00:00:00+00 |
   2021-09-23 00:00:00+00 |
   2021-09-22 00:00:00+00 |   9855

You can also use the time_bucket_gapfill function to generate data points where even the filled-in null values have timestamps. This can be useful for graphing libraries that require every value, including nulls, to have a timestamp so that they can accurately draw gaps in a graph. In this example, we generate 1080 data points across the last two weeks, fill the gaps with null values, and give each null value a timestamp:

  SELECT
    time_bucket_gapfill(INTERVAL '2 weeks' / 1080, time, now() - INTERVAL '2 weeks', now()) AS btime,
    sum(volume) AS volume
  FROM trades
  WHERE asset_code = 'TIMS'
    AND time >= now() - INTERVAL '2 weeks' AND time < now()
  GROUP BY btime
  ORDER BY btime;

This query outputs data like this:

           btime          | volume
  ------------------------+---------
   2021-03-09 17:28:00+00 | 1085.25
   2021-03-09 17:46:40+00 | 1020.42
   2021-03-09 18:05:20+00 |
   2021-03-09 18:24:00+00 | 1031.25
   2021-03-09 18:42:40+00 | 1049.09
   2021-03-09 19:01:20+00 | 1083.80
   2021-03-09 19:20:00+00 | 1092.66
   2021-03-09 19:38:40+00 |
   2021-03-09 19:57:20+00 | 1048.42
   2021-03-09 20:16:00+00 | 1063.17
   2021-03-09 20:34:40+00 | 1054.10
   2021-03-09 20:53:20+00 | 1037.78

Fill gaps by carrying the last observation forward

If your data collection records a row only when the actual value changes, your visualizations might still need all data points to properly display your results. In this situation, you can carry forward the last observed value to fill the gap. For example:

  SELECT
    time_bucket_gapfill(INTERVAL '5 min', time, now() - INTERVAL '2 weeks', now()) AS five_min,
    meter_id,
    locf(avg(data_value)) AS data_value
  FROM my_hypertable
  WHERE
    time > now() - INTERVAL '2 weeks'
    AND meter_id IN (1,2,3,4)
  GROUP BY five_min, meter_id;
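
By default, locf leaves the leading buckets null when no earlier value falls inside the queried range. You can pass a second argument that looks up the last value from before the range; a sketch using a correlated subquery:

  SELECT
    time_bucket_gapfill(INTERVAL '5 min', time, now() - INTERVAL '2 weeks', now()) AS five_min,
    meter_id,
    locf(
      avg(data_value),
      -- fetch the newest reading from before the gapfill range
      (SELECT data_value FROM my_hypertable m2
       WHERE m2.time < now() - INTERVAL '2 weeks' AND m2.meter_id = m.meter_id
       ORDER BY time DESC LIMIT 1)
    ) AS data_value
  FROM my_hypertable m
  WHERE
    time > now() - INTERVAL '2 weeks'
    AND meter_id IN (1,2,3,4)
  GROUP BY five_min, meter_id;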

Find the last point for each unique item

You can find the last point for each unique item in your database: for example, the last recorded measurement from each IoT device, the last location of each item in asset tracking, or the last price of a security. The standard approach to minimize the amount of data searched for the last point is to use a time predicate that tightly bounds the amount of time, or the number of chunks, to traverse. This method does not work unless all items have at least one record within the time range. A more robust method is to use a last point query to determine the last record for each unique item.
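
For comparison, a sketch of the time-predicate approach, using a hypothetical readings table keyed by device_id and an assumed one-day bound; any device with no record in that window is missing from the result:

  SELECT DISTINCT ON (device_id) *
  FROM readings
  -- the predicate limits the chunks scanned, but also limits which devices appear
  WHERE time > now() - INTERVAL '1 day'
  ORDER BY device_id, time DESC;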

In this example, useful for asset tracking or fleet management, we create a metadata table for each vehicle being tracked, and a second time-series table containing the vehicle’s location at a given time:

  CREATE TABLE vehicles (
    vehicle_id INTEGER PRIMARY KEY,
    vin_number CHAR(17),
    last_checkup TIMESTAMP
  );

  CREATE TABLE location (
    time TIMESTAMP NOT NULL,
    vehicle_id INTEGER REFERENCES vehicles (vehicle_id),
    latitude FLOAT,
    longitude FLOAT
  );

  SELECT create_hypertable('location', 'time');

We can use the first table, which gives us a distinct set of vehicles, to perform a LATERAL JOIN against the location table:

  SELECT data.* FROM vehicles v
  INNER JOIN LATERAL (
    SELECT * FROM location l
    WHERE l.vehicle_id = v.vehicle_id
    ORDER BY time DESC LIMIT 1
  ) AS data
  ON true
  ORDER BY v.vehicle_id, data.time DESC;

This query outputs data like this:

              time            | vehicle_id | latitude  | longitude
  ----------------------------+------------+-----------+------------
   2017-12-19 20:58:20.071784 |         72 | 40.753690 | -73.980340
   2017-12-20 11:19:30.837041 |        156 | 40.729265 | -73.993611
   2017-12-15 18:54:01.185027 |        231 | 40.350437 | -74.651954

This approach requires keeping a separate table of distinct item identifiers or names. You can do this by using a foreign key from the hypertable to the metadata table, as shown in the REFERENCES definition in the example.

The metadata table can be populated through business logic, for example when a vehicle is first registered with the system. Alternatively, you can dynamically populate it using a trigger when inserts or updates are performed against the hypertable. For example:

  CREATE OR REPLACE FUNCTION create_vehicle_trigger_fn()
  RETURNS TRIGGER LANGUAGE PLPGSQL AS
  $BODY$
  BEGIN
    INSERT INTO vehicles VALUES (NEW.vehicle_id, NULL, NULL) ON CONFLICT DO NOTHING;
    RETURN NEW;
  END
  $BODY$;

  CREATE TRIGGER create_vehicle_trigger
  BEFORE INSERT OR UPDATE ON location
  FOR EACH ROW EXECUTE PROCEDURE create_vehicle_trigger_fn();

You could also implement this functionality without a separate metadata table by performing a loose index scan over the location hypertable, although this requires more compute resources. Alternatively, you can speed up your SELECT DISTINCT queries by structuring them so that TimescaleDB can use its SkipScan feature.
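
For example, a sketch of a last point query that SkipScan can accelerate, assuming an index on (vehicle_id, time DESC):

  CREATE INDEX ON location (vehicle_id, time DESC);

  -- DISTINCT ON with a matching index lets TimescaleDB skip to
  -- the newest row for each vehicle instead of scanning every row
  SELECT DISTINCT ON (vehicle_id) *
  FROM location
  ORDER BY vehicle_id, time DESC;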