InfluxDB schema design

Design your schema for simpler and more performant queries. The guidelines below make your schema easier to query and explain how each design choice affects query performance.

Good schema design can prevent high series cardinality, resulting in better-performing queries. If you notice data reads and writes slowing down or want to learn how cardinality affects performance, see how to resolve high cardinality.

Design to query

The schemas below demonstrate measurements, tag keys, and field keys that are easy to query.

measurement        | tag key  | tag key | field key | field key
-------------------|----------|---------|-----------|------------
airSensor          | sensorId | station | humidity  | temperature
waterQualitySensor | sensorId | station | pH        | temperature

The airSensor and waterQualitySensor schemas illustrate the guidelines covered in the sections below.

The following points (formatted as line protocol) use the airSensor and waterQualitySensor schemas:

  airSensor,sensorId=A0100,station=Harbor humidity=35.0658,temperature=21.667 1636729543000000000
  waterQualitySensor,sensorId=W0101,station=Harbor pH=6.1,temperature=16.103 1472515200000000000

Keep measurements and keys simple

Store data in tag values or field values, not in tag keys, field keys, or measurements. If you design your schema to store data in tag and field values, your queries will be easier to write and more efficient.

In addition, you’ll keep cardinality low by not creating measurements and keys as you write data. To learn more about the performance impact of high series cardinality, see how to resolve high cardinality.

Compare schemas

Compare the following valid schemas represented by line protocol.

Recommended: the following schema stores metadata in separate crop, plot, and region tags. The temp field contains variable numeric data.

  Good Measurements schema - Data encoded in tags (recommended)
  -------------
  weather_sensor,crop=blueberries,plot=1,region=north temp=50.1 1472515200000000000
  weather_sensor,crop=blueberries,plot=2,region=midwest temp=49.8 1472515200000000000

Not recommended: the following schema stores multiple attributes (crop, plot, and region) concatenated within the measurement (blueberries.plot-1.north), similar to Graphite metrics.

  Bad Measurements schema - Data encoded in the measurement (not recommended)
  -------------
  blueberries.plot-1.north temp=50.1 1472515200000000000
  blueberries.plot-2.midwest temp=49.8 1472515200000000000

Not recommended: the following schema stores multiple attributes (crop, plot, and region) concatenated within the field key (blueberries.plot-1.north.temp).

  Bad Keys schema - Data encoded in field keys (not recommended)
  -------------
  weather_sensor blueberries.plot-1.north.temp=50.1 1472515200000000000
  weather_sensor blueberries.plot-2.midwest.temp=49.8 1472515200000000000
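
Querying the Bad Keys schema is similarly awkward: because plot and region are embedded in the field key, selecting north-region temperatures forces a regular expression on _field. The following is a sketch, assuming the data above:

  // Query *Bad Keys*, data encoded in field keys (not recommended)
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "weather_sensor" and r._field =~ /\.north\.temp$/)
  |> mean()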

Compare queries

Compare the following queries of the Good Measurements and Bad Measurements schemas. The Flux queries calculate the average temp for blueberries in the north region.

Easy to query: Good Measurements data is easily filtered by region tag values, as in the following example.

  // Query *Good Measurements*, data stored in separate tags (recommended)
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
  |> mean()

Difficult to query: Bad Measurements requires regular expressions to extract plot and region from the measurement, as in the following example.

  // Query *Bad Measurements*, data encoded in the measurement (not recommended)
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement =~ /\.north$/ and r._field == "temp")
  |> mean()

Complex measurements make some queries impossible. For example, calculating the average temperature of both plots is not possible with the Bad Measurements schema.
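
With the Good Measurements schema, by contrast, the same calculation only needs a tag filter and a regrouping step. The following sketch (assuming the Good Measurements data above) averages temp across all blueberry plots and regions:

  // Query *Good Measurements*, average temp across all plots (sketch)
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.crop == "blueberries" and r._field == "temp")
  |> group(columns: ["crop"])
  |> mean()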

Keep keys simple

In addition to keeping your keys free of data, follow these guidelines to make them easier to query:

Avoid keywords and special characters in keys

To simplify query writing, don’t include reserved keywords or special characters in tag and field keys. If you use Flux keywords in keys, then you’ll have to wrap the keys in double quotes. If you use non-alphanumeric characters in keys, then you’ll have to use bracket notation in [Flux](/flux/v0.x/).
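
For example, if a tag key contained a space (say, a hypothetical crop type tag), dot notation such as r.crop type wouldn't parse, so the filter would need bracket notation. A minimal sketch:

  // Hypothetical tag key "crop type" contains a space, so bracket notation is required
  from(bucket: "example-bucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r["crop type"] == "blueberries" and r._field == "temp")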

Avoid duplicate names for tags and fields

Avoid using the same name for a tag key and a field key within the same schema. Your query results may be unpredictable if you have a tag and a field with the same name.
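
For example, the following hypothetical schema uses temp as both a tag key and a field key. As noted above, query results for a schema like this may be unpredictable:

  Duplicate names schema - Same name used for a tag key and a field key (not recommended)
  -------------
  weather_sensor,crop=blueberries,temp=high temp=50.1 1472515200000000000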

Use tags and fields

Tag values are indexed and field values aren’t. This means that querying tags is more performant than querying fields. Your queries should guide what you store in tags and what you store in fields.

Use fields for unique and numeric data

  • Store unique or frequently changing values as field values.
  • Store numeric values as field values. (Tags only store strings.)

Use tags to improve query performance

  • Store values as tag values if they can be reasonably indexed.
  • Store values as tag values if the values are used in filter() or group() functions.
  • Store values as tag values if the values are shared across multiple data points, i.e., metadata about the field (see the example point after this list).
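
For example, the queries below assume points shaped roughly like the following, where company is shared metadata stored as a tag and userId is a unique value stored as a field (the measurement name and values are hypothetical):

  Example schema - company stored as a tag, userId stored as a field (hypothetical)
  -------------
  purchases,company=Acme userId="abcde",amount=42.3 1636729543000000000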

Because InfluxDB indexes tags, the query engine doesn’t need to scan every record in a bucket to locate a tag value. For example, consider a bucket that stores data about thousands of users. With userId stored in a field, a query for user abcde requires InfluxDB to scan userId in every row.

  from(bucket: "example-bucket")
  |> range(start: -7d)
  |> filter(fn: (r) => r._field == "userId" and r._value == "abcde")

To retrieve data more quickly, filter on a tag to reduce the number of rows scanned. The tag should store data that can be reasonably indexed. The following query filters by the company tag to reduce the number of rows scanned for userId.

  from(bucket: "example-bucket")
  |> range(start: -7d)
  |> filter(fn: (r) => r.company == "Acme")
  |> filter(fn: (r) => r._field == "userId" and r._value == "abcde")

Keep tags simple

Use one tag for each data attribute. If your source data contains multiple data attributes in a single parameter, split each attribute into its own tag. When each tag represents one attribute (not multiple concatenated attributes) of your data, you’ll reduce the need for regular expressions in your queries. Without regular expressions, your queries will be easier to write and more performant.

Compare schemas

Compare the following valid schemas represented by line protocol.

Recommended: the following schema splits location data into plot and region tags.

  Good Tags schema - Data encoded in multiple tags
  -------------
  weather_sensor,crop=blueberries,plot=1,region=north temp=50.1 1472515200000000000
  weather_sensor,crop=blueberries,plot=2,region=midwest temp=49.8 1472515200000000000

Not recommended: the following schema stores multiple attributes (plot and region) concatenated within the location tag value (plot-1.north).

  Bad Tags schema - Multiple data encoded in a single tag
  -------------
  weather_sensor,crop=blueberries,location=plot-1.north temp=50.1 1472515200000000000
  weather_sensor,crop=blueberries,location=plot-2.midwest temp=49.8 1472515200000000000

Compare queries

Compare queries of the Good Tags and Bad Tags schemas. The Flux queries calculate the average temp for blueberries in the north region.

Easy to query: Good Tags data is easily filtered by region tag values, as in the following example.

  // Query *Good Tags* schema, data encoded in multiple tags
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.region == "north" and r._field == "temp")
  |> mean()

Difficult to query: Bad Tags requires regular expressions to parse the complex location values, as in the following example.

  // Query *Bad Tags* schema, multiple data encoded in a single tag
  from(bucket: "example-bucket")
  |> range(start: 2016-08-30T00:00:00Z)
  |> filter(fn: (r) => r._measurement == "weather_sensor" and r.location =~ /\.north$/ and r._field == "temp")
  |> mean()
