Resolve high series cardinality

If reads and writes to InfluxDB have started to slow down, high series cardinality (too many series) may be causing memory issues.

Take steps to understand and resolve high series cardinality.

  1. Learn the causes of high series cardinality
  2. Measure series cardinality
  3. Resolve high cardinality

Learn the causes of high series cardinality

InfluxDB indexes the following data elements to speed up reads:

  • measurement
  • tag keys and tag values
  • field keys

Each unique set of indexed data elements forms a series key. Tags containing highly variable information like unique IDs, hashes, and random strings lead to a large number of series, also known as high series cardinality. High series cardinality is a primary driver of high memory usage for many database workloads.
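
For example, in the line protocol below (the measurement and key names are hypothetical), each point carries a different request_id tag value, so each write creates a new series:

    # Each point has a unique request_id tag value, so each write
    # creates a new series key (measurement + tag set + field key).
    http_requests,host=server01,request_id=a1b2c3 duration_ms=12 1672531200000000000
    http_requests,host=server01,request_id=d4e5f6 duration_ms=7 1672531260000000000
    http_requests,host=server01,request_id=g7h8i9 duration_ms=31 1672531320000000000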

Measure series cardinality

Use the following tools to measure the series cardinality of your buckets:

  • influxdb.cardinality(): Flux function that returns the number of unique series keys in your data.
  • SHOW SERIES CARDINALITY: InfluxQL command that returns series cardinality.
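
For example, the following Flux query (the bucket name is a placeholder) returns the number of unique series keys written over the last 30 days:

    import "influxdata/influxdb"

    // Count unique series keys in the bucket over the queried time range
    influxdb.cardinality(bucket: "example-bucket", start: -30d)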

Resolve high cardinality

To resolve high series cardinality, complete the following steps (for multiple buckets if applicable):

  1. Review tags.
  2. Improve your schema.
  3. Delete high cardinality data.

Review tags

Review your tags to ensure each tag does not contain unique values for most entries.

Common tag issues

Look for the following common issues, which often cause many unique tag values:

  • Writing log messages to tags. If a log message includes a unique timestamp, pointer value, or unique string, many unique tag values are created.
  • Writing timestamps to tags. Typically done by accident in client code.
  • Unique tag values that grow over time. For example, a user ID tag may work at a small startup, but may begin to cause issues when the company grows to hundreds of thousands of users. A common fix is shown in the sketch after this list.
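
A common fix for these issues (sketched below with hypothetical measurement and key names) is to store highly variable values, such as log messages, as fields rather than tags; fields are not indexed and do not create new series:

    # Anti-pattern: a unique message in a tag creates a new series per write
    app_logs,host=server01,message=error\ 4f2e9a level=3i 1672531200000000000

    # Better: store the message as an unindexed field instead
    app_logs,host=server01 level=3i,message="error 4f2e9a" 1672531200000000000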

Count unique tag values

The following example Flux query shows you which tags are contributing the most to cardinality. Look for tags with values orders of magnitude higher than others.

    // Count unique values for each tag in a bucket
    import "influxdata/influxdb/schema"

    cardinalityByTag = (bucket) => schema.tagKeys(bucket: bucket)
        |> map(
            fn: (r) => ({
                tag: r._value,
                _value: if contains(set: ["_stop", "_start"], value: r._value) then
                    0
                else
                    (schema.tagValues(bucket: bucket, tag: r._value)
                        |> count()
                        |> findRecord(fn: (key) => true, idx: 0))._value,
            }),
        )
        |> group(columns: ["tag"])
        |> sum()

    cardinalityByTag(bucket: "example-bucket")

If you’re experiencing runaway cardinality, the query above may time out. If it does, run the following queries one at a time.

  1. Generate a list of tags:

       // Generate a list of tags
       import "influxdata/influxdb/schema"

       schema.tagKeys(bucket: "example-bucket")

  2. Count unique tag values for each tag:

       // Run the following for each tag to count the number of unique tag values
       import "influxdata/influxdb/schema"

       tag = "example-tag-key"

       schema.tagValues(bucket: "example-bucket", tag: tag)
           |> count()

These queries should help identify the sources of high cardinality in each of your buckets. To determine which specific tags are growing, check the cardinality again after 24 hours to see if one or more tags have grown significantly.

Improve your schema

To minimize cardinality in the future, design your schema for easy and performant querying. Review best practices for schema design.
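
For example (a sketch, with hypothetical measurement and key names), keep tags limited to a small, bounded set of values that you filter or group by, and store unbounded identifiers as fields:

    # The region tag has only a handful of possible values, keeping
    # cardinality low; the unbounded user ID is stored as a field.
    signups,region=us-west user_id="u-102938",count=1i 1672531200000000000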

Delete data to reduce high cardinality

Consider whether you need the data that is causing high cardinality. If you no longer need this data, you can delete the whole bucket or delete a range of data.
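
For example, the influx CLI can delete a range of data that matches a predicate (the bucket name, measurement, and tag value below are placeholders):

    # Delete points in the given time range that match the predicate
    influx delete \
      --bucket example-bucket \
      --start 2023-01-01T00:00:00Z \
      --stop 2023-02-01T00:00:00Z \
      --predicate '_measurement="app_logs" AND host="server01"'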