Hardware sizing guidelines

Review configuration and hardware guidelines for InfluxDB OSS (open source) and InfluxDB Enterprise:

Disclaimer: Your numbers may vary from these guidelines, which provide estimated benchmarks for implementing the most performant system for your business.

Single node or cluster?

If your InfluxDB workload exceeds the write, query, or unique series thresholds in the InfluxDB OSS guidelines below, a single node (InfluxDB OSS) may not support your needs.

We recommend InfluxDB Enterprise, which supports multiple data nodes (a cluster) across multiple server cores. InfluxDB Enterprise distributes multiple copies of your data across a cluster, providing high availability and redundancy, so an unavailable node doesn’t significantly impact the cluster. Please contact us for assistance tuning your system.

If you want a fully open source, single-node instance of InfluxDB, your workload stays within those thresholds, and you don’t require redundancy, we recommend InfluxDB OSS.

Note: Without the redundancy of a cluster, writes and queries fail immediately when a server is unavailable.

Query guidelines

Query complexity varies widely in its impact on the system. Recommendations for both single nodes and clusters are based on moderate query loads.

For simple or complex queries, we recommend testing and adjusting the suggested requirements as needed. Query complexity is defined by the following criteria:

Simple queries:
  • Have few or no functions and no regular expressions
  • Are bounded in time to a few minutes, hours, or 24 hours at most
  • Typically execute in a few milliseconds to a few dozen milliseconds

Moderate queries:
  • Have multiple functions and one or two regular expressions
  • May also have GROUP BY clauses or sample a time range of multiple weeks
  • Typically execute in a few hundred or a few thousand milliseconds

Complex queries:
  • Have multiple aggregation or transformation functions or multiple regular expressions
  • May sample a very large time range of months or years
  • Typically take multiple seconds to execute
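
As a rough illustration only, the InfluxQL sketches below show what each tier can look like. The measurement, tag, and field names (cpu, host, usage_idle) are hypothetical examples, not part of the guidelines.

  -- Simple: no functions, bounded to a short time range
  SELECT usage_idle FROM cpu WHERE time > now() - 1h

  -- Moderate: one function, one regular expression, GROUP BY over weeks of data
  SELECT mean(usage_idle) FROM cpu
    WHERE host =~ /^web-/ AND time > now() - 3w
    GROUP BY time(1h), host

  -- Complex: multiple aggregation/transformation functions over months of data
  SELECT derivative(mean(usage_idle), 1h) FROM cpu
    WHERE host =~ /^web-/ AND time > now() - 180d
    GROUP BY time(1d), host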

InfluxDB OSS guidelines

Run InfluxDB on locally attached solid state drives (SSDs). Other storage configurations have lower performance and may not be able to recover from small interruptions in normal processing.

The estimated guidelines below relate vCPU or CPU, RAM, and IOPS (input/output operations per second) to supported writes per second, queries per second, and number of unique series.

vCPU or CPU | RAM | IOPS | Writes per second | Queries* per second | Unique series
2-4 cores | 2-4 GB | 500 | < 5,000 | < 5 | < 100,000
4-6 cores | 8-32 GB | 500-1000 | < 250,000 | < 25 | < 1,000,000
8+ cores | 32+ GB | 1000+ | > 250,000 | > 25 | > 1,000,000
* Queries per second is for moderate queries. Queries vary widely in their impact on the system. For simple or complex queries, we recommend testing and adjusting the suggested requirements as needed. See the query guidelines for details.

InfluxDB Enterprise cluster guidelines

Meta nodes

Set up clusters with an odd number of meta nodes; an even number may cause issues in certain configurations.

A cluster must have a minimum of three independent meta nodes for data redundancy and availability. A cluster with 2n + 1 meta nodes can tolerate the loss of n meta nodes.

Meta nodes do not need very much computing power. Regardless of the cluster load, we recommend the following guidelines for the meta nodes:

  • vCPU or CPU: 1-2 cores
  • RAM: 512 MB - 1 GB
  • IOPS: 50

Web node

The InfluxDB Enterprise web server is primarily an HTTP server with load requirements similar to those of a typical HTTP server. For most applications, the server doesn’t need to be very robust. A cluster can function with only one web server, but for redundancy, we recommend connecting multiple web servers to a single back-end Postgres database.

Note: Production clusters should not use the SQLite database; it does not support redundant web servers or handle high loads well.

  • vCPU or CPU: 2-4 cores
  • RAM: 2-4 GB
  • IOPS: 100

Data nodes

A cluster with one data node is valid but has no data redundancy. Redundancy is set by the replication factor on the retention policy the data is written to. Where n is the replication factor, a cluster can lose n - 1 data nodes and still return complete query results.
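
For reference, the replication factor is set when creating or altering a retention policy with InfluxQL; a minimal sketch follows, where the database and retention policy names ("mydb", "two_weeks") are placeholder examples.

  -- Create a retention policy with a replication factor of 2
  CREATE RETENTION POLICY "two_weeks" ON "mydb" DURATION 2w REPLICATION 2 DEFAULT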

Note: For optimal data distribution within the cluster, use an even number of data nodes.

Guidelines vary by writes per second per node, moderate queries per second per node, and the number of unique series per node.

Guidelines per node

vCPU or CPU | RAM | IOPS | Writes per second | Queries* per second | Unique series
2 cores | 4-8 GB | 1000 | < 5,000 | < 5 | < 100,000
4-6 cores | 16-32 GB | 1000+ | < 100,000 | < 25 | < 1,000,000
8+ cores | 32+ GB | 1000+ | > 100,000 | > 25 | > 1,000,000
* Guidelines are provided for moderate queries. Queries vary widely in their impact on the system. For simple or complex queries, we recommend testing and adjusting the suggested requirements as needed. See the query guidelines for details.

When do I need more RAM?

In general, more RAM helps queries return faster. Your RAM requirements are primarily determined by series cardinality. Higher cardinality requires more RAM. Regardless of RAM, a series cardinality of 10 million or more can cause OOM (out of memory) failures. You can usually resolve OOM issues by redesigning your schema.
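
To gauge where you stand, you can check series cardinality with InfluxQL; a small sketch follows, where the database name ("mydb") is a placeholder example.

  -- Estimated series cardinality for an example database
  SHOW SERIES CARDINALITY ON "mydb"

  -- Exact, but more expensive, count
  SHOW SERIES EXACT CARDINALITY ON "mydb"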

Guidelines per cluster

InfluxDB Enterprise guidelines vary by writes and queries per second, series cardinality, replication factor, and infrastructure (AWS EC2 R4 instances or equivalent):

  • R4.xlarge (4 cores)
  • R4.2xlarge (8 cores)
  • R4.4xlarge (16 cores)
  • R4.8xlarge (32 cores)

Guidelines stem from a DevOps monitoring use case: maintaining a group of computers and monitoring server metrics (such as CPU, kernel, memory, disk space, disk I/O, network, and so on).

Recommended cluster configurations

Cluster configuration guidelines are organized by:

  • Series cardinality in your data set: 10,000, 100,000, 1,000,000, or 10,000,000
  • Number of data nodes
  • Number of server cores

For each cluster configuration, you’ll find guidelines for the following:

  • maximum writes per second only (no dashboard queries are running)
  • maximum queries per second only (no data is being written)
  • maximum simultaneous queries and writes per second, combined

Review cluster configuration tables

  1. Find the series cardinality of your data set below (10,000; 100,000; 1,000,000; or 10,000,000 series), and then find the replication factor for your configuration.
  2. In the Nodes x Core column, find the number of data nodes and server cores in your configuration, and then review the recommended maximum guidelines.


Select one of the following replication factors to see the recommended cluster configuration for 10,000 series:

Replication factor, 1

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
1 x 4 | 188,000 | 5 | 4 + 99,000
1 x 8 | 405,000 | 9 | 8 + 207,000
1 x 16 | 673,000 | 15 | 14 + 375,000
1 x 32 | 1,056,000 | 24 | 22 + 650,000
2 x 4 | 384,000 | 14 | 14 + 184,000
2 x 8 | 746,000 | 22 | 22 + 334,000
2 x 16 | 1,511,000 | 56 | 40 + 878,000
2 x 32 | 2,426,000 | 96 | 68 + 1,746,000

Replication factor, 2

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
2 x 4 | 296,000 | 16 | 16 + 151,000
2 x 8 | 560,000 | 30 | 26 + 290,000
2 x 16 | 972,000 | 54 | 50 + 456,000
2 x 32 | 1,860,000 | 84 | 74 + 881,000
4 x 8 | 1,781,000 | 100 | 64 + 682,000
4 x 16 | 3,430,000 | 192 | 104 + 1,732,000
4 x 32 | 6,351,000 | 432 | 188 + 3,283,000
6 x 8 | 2,923,000 | 216 | 138 + 1,049,000
6 x 16 | 5,650,000 | 498 | 246 + 2,246,000
6 x 32 | 9,842,000 | 1248 | 528 + 5,229,000
8 x 8 | 3,987,000 | 632 | 336 + 1,722,000
8 x 16 | 7,798,000 | 1384 | 544 + 3,911,000
8 x 32 | 13,189,000 | 3648 | 1,152 + 7,891,000

Replication factor, 3

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
3 x 8 | 815,000 | 63 | 54 + 335,000
3 x 16 | 1,688,000 | 120 | 87 + 705,000
3 x 32 | 3,164,000 | 255 | 132 + 1,626,000
6 x 8 | 2,269,000 | 252 | 168 + 838,000
6 x 16 | 4,593,000 | 624 | 336 + 2,019,000
6 x 32 | 7,776,000 | 1340 | 576 + 3,624,000

Replication factor, 4

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
4 x 8 | 1,028,000 | 116 | 98 + 365,000
4 x 16 | 2,067,000 | 208 | 140 + 8,056,000
4 x 32 | 3,290,000 | 428 | 228 + 1,892,000
8 x 8 | 2,813,000 | 928 | 496 + 1,225,000
8 x 16 | 5,225,000 | 2176 | 800 + 2,799,000
8 x 32 | 8,555,000 | 5184 | 1088 + 6,055,000

Replication factor, 6

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
6 x 8 | 1,261,000 | 288 | 192 + 522,000
6 x 16 | 2,370,000 | 576 | 288 + 1,275,000
6 x 32 | 3,601,000 | 1056 | 336 + 2,390,000

Replication factor, 8

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
8 x 8 | 1,382,000 | 1184 | 416 + 915,000
8 x 16 | 2,658,000 | 2504 | 448 + 2,204,000
8 x 32 | 3,887,000 | 5184 | 602 + 4,120,000

Select one of the following replication factors to see the recommended cluster configuration for 100,000 series:

Replication factor, 1

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
1 x 4 | 143,000 | 5 | 4 + 77,000
1 x 8 | 322,000 | 9 | 8 + 167,000
1 x 16 | 624,000 | 17 | 12 + 337,000
1 x 32 | 1,114,000 | 26 | 18 + 657,000
2 x 4 | 265,000 | 14 | 12 + 115,000
2 x 8 | 573,000 | 30 | 22 + 269,000
2 x 16 | 1,261,000 | 52 | 38 + 679,000
2 x 32 | 2,335,000 | 90 | 66 + 1,510,000

Replication factor, 2

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
2 x 4 | 196,000 | 16 | 14 + 77,000
2 x 8 | 482,000 | 30 | 24 + 203,000
2 x 16 | 1,060,000 | 60 | 42 + 415,000
2 x 32 | 1,958,000 | 94 | 64 + 984,000
4 x 8 | 1,144,000 | 108 | 68 + 406,000
4 x 16 | 2,512,000 | 228 | 148 + 866,000
4 x 32 | 4,346,000 | 564 | 320 + 1,886,000
6 x 8 | 1,802,000 | 252 | 156 + 618,000
6 x 16 | 3,924,000 | 562 | 384 + 1,068,000
6 x 32 | 6,533,000 | 1340 | 912 + 2,083,000
8 x 8 | 2,516,000 | 712 | 360 + 1,020,000
8 x 16 | 5,478,000 | 1632 | 1,024 + 1,843,000
8 x 32 | 10,527,000 | 3392 | 1,792 + 4,998,000

Replication factor, 3

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
3 x 8 | 616,000 | 72 | 51 + 218,000
3 x 16 | 1,268,000 | 117 | 84 + 438,000
3 x 32 | 2,260,000 | 189 | 114 + 984,000
6 x 8 | 1,393,000 | 294 | 192 + 421,000
6 x 16 | 3,056,000 | 726 | 456 + 893,000
6 x 32 | 5,017,000 | 1584 | 798 + 1,098,000

Replication factor, 4

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
4 x 8 | 635,000 | 112 | 80 + 207,000
4 x 16 | 1,359,000 | 188 | 124 + 461,000
4 x 32 | 2,320,000 | 416 | 192 + 1,102,000
8 x 8 | 1,570,000 | 1360 | 816 + 572,000
8 x 16 | 3,205,000 | 2720 | 832 + 2,053,000
8 x 32 | 3,294,000 | 2592 | 804 + 2,174,000

Replication factor, 6

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
6 x 8 | 694,000 | 302 | 198 + 204,000
6 x 16 | 1,397,000 | 552 | 360 + 450,000
6 x 32 | 2,298,000 | 1248 | 384 + 1,261,000

Replication factor, 8

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
8 x 8 | 739,000 | 1296 | 480 + 371,000
8 x 16 | 1,396,000 | 2592 | 672 + 843,000
8 x 32 | 2,614,000 | 2496 | 960 + 1,371,000

Select one of the following replication factors to see the recommended cluster configuration for 1,000,000 series:

Replication factor, 2

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
2 x 4 | 104,000 | 18 | 12 + 54,000
2 x 8 | 195,000 | 36 | 24 + 99,000
2 x 16 | 498,000 | 70 | 44 + 145,000
2 x 32 | 1,195,000 | 102 | 84 + 232,000
4 x 8 | 488,000 | 120 | 56 + 222,000
4 x 16 | 1,023,000 | 244 | 112 + 428,000
4 x 32 | 2,686,000 | 468 | 208 + 729,000
6 x 8 | 845,000 | 270 | 126 + 356,000
6 x 16 | 1,780,000 | 606 | 288 + 663,000
6 x 32 | 430,000 | 1,488 | 624 + 1,209,000
8 x 8 | 1,831,000 | 808 | 296 + 778,000
8 x 16 | 4,167,000 | 1,856 | 640 + 2,031,000
8 x 32 | 7,813,000 | 3,201 | 896 + 4,897,000

Replication factor, 3

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
3 x 8 | 234,000 | 72 | 42 + 87,000
3 x 16 | 613,000 | 120 | 75 + 166,000
3 x 32 | 1,365,000 | 141 | 114 + 984,000
6 x 8 | 593,000 | 318 | 144 + 288,000
6 x 16 | 1,545,000 | 744 | 384 + 407,000
6 x 32 | 3,204,000 | 1632 | 912 + 505,000

Replication factor, 4

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
4 x 8 | 258,000 | 116 | 68 + 73,000
4 x 16 | 675,000 | 196 | 132 + 140,000
4 x 32 | 1,513,000 | 244 | 176 + 476,000
8 x 8 | 614,000 | 1096 | 400 + 258,000
8 x 16 | 1,557,000 | 2496 | 1152 + 436,000
8 x 32 | 3,265,000 | 4288 | 2240 + 820,000

Replication factor, 6

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
6 x 8 | 694,000 | 302 | 198 + 204,000
6 x 16 | 1,397,000 | 552 | 360 + 450,000
6 x 32 | 2,298,000 | 1248 | 384 + 1,261,000

Replication factor, 8

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
8 x 8 | 739,000 | 1296 | 480 + 371,000
8 x 16 | 1,396,000 | 2592 | 672 + 843,000
8 x 32 | 2,614,000 | 2496 | 960 + 1,371,000

Select one of the following replication factors to see the recommended cluster configuration for 10,000,000 series:

Replication factor, 1

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
2 x 4 | 122,000 | 16 | 12 + 81,000
2 x 8 | 259,000 | 36 | 24 + 143,000
2 x 16 | 501,000 | 66 | 44 + 290,000
2 x 32 | 646,000 | 142 | 54 + 400,000

Replication factor, 2

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
2 x 4 | 87,000 | 18 | 14 + 56,000
2 x 8 | 169,000 | 38 | 24 + 98,000
2 x 16 | 334,000 | 76 | 46 + 224,000
2 x 32 | 534,000 | 136 | 58 + 388,000
4 x 8 | 335,000 | 120 | 60 + 204,000
4 x 16 | 643,000 | 256 | 112 + 395,000
4 x 32 | 967,000 | 560 | 158 + 806,000
6 x 8 | 521,000 | 378 | 144 + 319,000
6 x 16 | 890,000 | 582 | 186 + 513,000
8 x 8 | 699,000 | 1,032 | 256 + 477,000
8 x 16 | 1,345,000 | 2,048 | 544 + 741,000

Replication factor, 3

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
3 x 8 | 170,000 | 60 | 42 + 98,000
3 x 16 | 333,000 | 129 | 76 + 206,000
3 x 32 | 609,000 | 178 | 60 + 162,000
6 x 8 | 395,000 | 402 | 132 + 247,000
6 x 16 | 679,000 | 894 | 150 + 527,000

Replication factor, 4

Nodes x Core | Writes per second | Queries per second | Queries + writes per second
4 x 8 | 183,365 | 132 | 52 + 100,000

Storage: type, amount, and configuration

Storage volume and IOPS

Consider the type and amount of storage you need. InfluxDB is designed to run on solid state drives (SSDs) and memory-optimized cloud instances, for example, AWS EC2 R5 or R4 instances. InfluxDB isn’t tested on hard disk drives (HDDs), and we don’t recommend HDDs for production. InfluxDB servers must have a minimum of 1000 IOPS on storage to ensure recovery and availability. We recommend at least 2000 IOPS for rapid recovery of cluster data nodes after downtime.

See your cloud provider’s documentation for IOPS details on your storage volumes.

Bytes and compression

Database names, measurements, tag keys, field keys, and tag values are stored only once and always as strings. Field values and timestamps are stored for every point.

Non-string values require approximately three bytes. String values require variable space, determined by string compression. For example, one million points with two non-string field values each require roughly 1,000,000 × 2 × 3 bytes, or about 6 MB, for the field values alone.

Separate wal and data directories

When running InfluxDB in a production environment, store the wal directory and the data directory on separate storage devices. This optimization significantly reduces disk contention under heavy write load, an important consideration if the write load is highly variable. If the write load does not vary by more than 15%, the optimization is probably not necessary.
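
As a minimal sketch, the two locations are set by the dir and wal-dir settings in the [data] section of the InfluxDB configuration file; the mount points shown below are hypothetical examples of two separate devices.

  # influxdb.conf (example paths; mount each directory on its own device)
  [data]
    dir = "/mnt/influx-data/data"     # TSM data files
    wal-dir = "/mnt/influx-wal/wal"   # write ahead log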