These benchmarks measure RocksDB performance when data resides on flash storage. (The benchmarks on this page were generated in June 2020 with RocksDB 6.10.0 unless otherwise noted)


All of the benchmarks are run on the same AWS instance. Here are the details of the test setup:

  • Instance type: m5d.2xlarge 8 CPU, 32 GB Memory, 1 x 300 GB NVMe SSD.
  • Kernel version: Linux 4.14.177-139.253.amzn2.x86_64
  • File System: XFS with discard enabled

To understand the performance of the SSD, we ran an fio test and observed 117K IOPS of 4KB reads (see the fio test results section below for the full output).
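As a sanity check on the fio numbers, the reported IOPS and block size should multiply out to roughly the reported bandwidth (the appendix shows 469326 KB/s for the 32-job run). A quick back-of-the-envelope calculation:

```shell
awk 'BEGIN {
  iops = 117331        # 4KB random-read IOPS from the 32-job fio run below
  kb_per_op = 4        # --bs=4k
  # 117331 ops/s * 4 KB/op, compare with the fio-reported aggregate bandwidth
  printf "%d KB/s\n", iops * kb_per_op
}'
```

This prints 469324 KB/s, in line with the 469326 KB/s fio reports directly.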

Unless otherwise specified, all tests were executed with the following parameters: NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944. Long-running tests were executed with a duration of 5400 seconds (DURATION=5400).

All other parameters used their default values unless explicitly mentioned here. Tests were executed sequentially against the same database instance. The db_bench tool was built via “make release”.
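For concreteness, this is a sketch of how each run is launched via RocksDB's tools/benchmark.sh wrapper, which reads its configuration from environment variables. The DB_DIR/WAL_DIR paths here are illustrative assumptions, not taken from the report:

```shell
# Illustrative setup for tools/benchmark.sh (paths are assumptions).
export DB_DIR=/data/rocksdb
export WAL_DIR=/data/rocksdb
export NUM_KEYS=900000000
export NUM_THREADS=32
export CACHE_SIZE=6442450944      # 6 GiB block cache
export DURATION=5400              # long-running tests only

# The four tests then run back to back against the same database, e.g.:
#   tools/benchmark.sh bulkload
#   tools/benchmark.sh overwrite
#   tools/benchmark.sh readwhilewriting
#   tools/benchmark.sh readrandom
echo "cache = $((CACHE_SIZE / 1024 / 1024 / 1024)) GiB"
```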

The following test sequence was executed:

Test 1. Bulk Load of keys in Random Order (bulkload)

NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 bulkload

Measure the performance of loading 900 million keys into the database. The keys are inserted in random order. The database is empty at the beginning of this benchmark run and gradually fills up. No data is read while the load is in progress.

Test 2. Random Write (overwrite)

NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 overwrite

Measure the performance of randomly overwriting keys in the database. The database was first created by the previous benchmark.

Test 3. Multi-threaded read and single-threaded write (readwhilewriting)

NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 readwhilewriting

Measure random-read performance while updates to existing keys are ongoing. The database from Test #2 was used as the starting point.

Test 4. Random Read (readrandom)

NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 readrandom

Measure random read performance of a database.

The following shows results of these tests using various releases and parameters.

Scenario 1: RocksDB 6.10, Different Block Sizes

The test cases were executed with various block sizes. The Direct I/O (DIO) test was executed with an 8K block size. In the “RL” tests, a timed rate-limited operation was placed before the reported operation. For example, between the “bulkload” and “overwrite” operations, a 30-minute rate-limited overwrite (limited to 2MB/sec) was conducted. This timed operation was meant to help guarantee that any flush or other background operation happened before the timed, reported operation, thereby creating more predictability in the percentile performance numbers.
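A back-of-the-envelope check shows how little data the rate-limited warm-up itself writes, which is why it mainly serves to give flushes and compactions time to settle rather than to change the database contents:

```shell
awk 'BEGIN {
  secs = 30 * 60   # 30-minute rate-limited overwrite
  mbps = 2         # limited to 2 MB/sec
  # Total data written during the warm-up phase
  printf "%d MB\n", secs * mbps
}'
```

This prints 3600 MB, i.e. about 3.6 GB, small relative to the loaded database.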

Test Case 1 : bulkload

  • 8K: Complete bulkload in 4560 seconds
  • 4K: Complete bulkload in 5215 seconds
  • 16K: Complete bulkload in 3996 seconds
  • DIO: Complete bulkload in 4547 seconds
  • 8K RL: Complete bulkload in 4388 seconds
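Since bulkload inserts a fixed 900 million keys, the completion times above translate directly into average insert rates; a quick calculation:

```shell
awk 'BEGIN {
  keys = 900000000   # NUM_KEYS used by every run
  # average inserts/sec = keys / bulkload completion time
  printf "8K:    %d inserts/sec\n", keys / 4560
  printf "4K:    %d inserts/sec\n", keys / 5215
  printf "16K:   %d inserts/sec\n", keys / 3996
  printf "DIO:   %d inserts/sec\n", keys / 4547
  printf "8K RL: %d inserts/sec\n", keys / 4388
}'
```

The 16K run comes out fastest at roughly 225K inserts/sec, with 4K slowest at roughly 173K inserts/sec.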

Block | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Uptime | Stall-time | Stall% | du -s -k
8K RL989786396.50.2159.4159.41.0179.

Test Case 2 : overwrite

Block | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Stall-time | Stall% | du -s -k
8K RL8554234.30.1161.2757.84.7143.6748.1340.5735.8118523085159137540100:08:18.3599.2

Test Case 3 : readwhilewriting

Block | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Stall-time | Stall% | du -s -k
8K RL10159831.

Test Case 4 : readrandom

Block | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | du -s -k
8K RL10579033.

Scenario 2: RocksDB 6.10, 2K Value size, 100M Keys.

The test cases were executed with the default block size and a value size of 2K. Only 100M keys were written to the database. The bulkload completed in 2018 seconds.
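Assuming the 2K value size means 2048 bytes, the scenario parameters imply the following average insert rate and raw value volume:

```shell
awk 'BEGIN {
  keys = 100000000    # 100M keys in this scenario
  secs = 2018         # bulkload completion time
  value = 2048        # assumed byte size of the "2K" values
  printf "%d inserts/sec\n", keys / secs
  printf "%.1f GB of raw value bytes\n", keys * value / 1e9
}'
```

This prints about 49554 inserts/sec and 204.8 GB of raw value bytes, before compression and space amplification.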

Test | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Uptime | Stall-time | Stall% | du -s -k

Scenario 3: Different Versions of RocksDB

These tests were executed against different versions of RocksDB, by checking out the corresponding branch and doing a “make release”.

Test Case 1 : NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 bulkload

  • 6.10.0: Complete bulkload in 4560 seconds
  • 6.3.6: Complete bulkload in 4584 seconds
  • 6.0.2: Complete bulkload in 4668 seconds
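The version-to-version differences are easier to read as relative slowdowns against the 6.10.0 baseline:

```shell
awk 'BEGIN {
  base = 4560   # 6.10.0 bulkload seconds (baseline)
  # percentage slowdown of older releases relative to 6.10.0
  printf "6.3.6: +%.1f%%\n", (4584 - base) * 100.0 / base
  printf "6.0.2: +%.1f%%\n", (4668 - base) * 100.0 / base
}'
```

This prints +0.5% for 6.3.6 and +2.4% for 6.0.2, i.e. bulkload time improved modestly across these releases.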

Version | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Uptime | Stall-time | Stall% | du -s -k

Test Case 2 : NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 overwrite

Version | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Stall-time | Stall% | du -s -k

Test Case 3 : NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 readwhilewriting

Version | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | Stall-time | Stall% | du -s -k

Test Case 4 : NUM_KEYS=900000000 NUM_THREADS=32 CACHE_SIZE=6442450944 DURATION=5400 readrandom

Version | ops/sec | mb/sec | Size-GB | L0_GB | Sum_GB | W-Amp | W-MB/s | usec/op | p50 | p75 | p99 | p99.9 | p99.99 | du -s -k


fio test results

  $ fio --randrepeat=1 --ioengine=sync --direct=1 --gtod_reduce=1 --name=test --filename=/data/test_file --bs=4k --iodepth=64 --size=4G --readwrite=randread --numjobs=32 --group_reporting
  test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=64
  ...
  fio-2.14
  Starting 32 processes
  Jobs: 3 (f=3): [_(3),r(1),_(1),E(1),_(10),r(1),_(13),r(1),E(1)] [100.0% done] [445.3MB/0KB/0KB /s] [114K/0/0 iops] [eta 00m:00s]
  test: (groupid=0, jobs=32): err= 0: pid=28042: Fri Jul 24 01:36:19 2020
  read : io=131072MB, bw=469326KB/s, iops=117331, runt=285980msec
  cpu : usr=1.29%, sys=3.26%, ctx=33585114, majf=0, minf=297
  IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  issued : total=r=33554432/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=64
  Run status group 0 (all jobs):
  READ: io=131072MB, aggrb=469325KB/s, minb=469325KB/s, maxb=469325KB/s, mint=285980msec, maxt=285980msec
  Disk stats (read/write):
  nvme1n1: ios=33654742/61713, merge=0/40, ticks=8723764/89064, in_queue=8788592, util=100.00%
  $ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/data/test_file --bs=4k --iodepth=64 --size=4G --readwrite=randread
  test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
  fio-2.14
  Starting 1 process
  Jobs: 1 (f=1): [r(1)] [100.0% done] [456.3MB/0KB/0KB /s] [117K/0/0 iops] [eta 00m:00s]
  test: (groupid=0, jobs=1): err= 0: pid=28385: Fri Jul 24 01:36:56 2020
  read : io=4096.0MB, bw=547416KB/s, iops=136854, runt= 7662msec
  cpu : usr=22.20%, sys=48.81%, ctx=144112, majf=0, minf=73
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
  submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
  complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
  issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
  latency : target=0, window=0, percentile=100.00%, depth=64
  Run status group 0 (all jobs):
  READ: io=4096.0MB, aggrb=547416KB/s, minb=547416KB/s, maxb=547416KB/s, mint=7662msec, maxt=7662msec
  Disk stats (read/write):
  nvme1n1: ios=1050868/1904, merge=0/1, ticks=374836/2900, in_queue=370532, util=98.70%

Previous Results