db_bench
db_bench is the main tool that is used to benchmark RocksDB’s performance. RocksDB inherited db_bench from LevelDB, and enhanced it to support many additional options. db_bench supports many benchmarks to generate different types of workloads, and its various options can be used to control the tests.
If you are just getting started with db_bench, here are a few things you can try:
- Start with a simple benchmark like fillseq (or fillrandom) to create a database and fill it with some data
./db_bench --benchmarks="fillseq"
If you want more stats, add the meta operation "stats" to the benchmark list and pass the --statistics flag.
./db_bench --benchmarks="fillseq,stats" --statistics
- Read the data back
./db_bench --benchmarks="readrandom" --use_existing_db
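For anything beyond a smoke test you will usually want to control the workload size and parallelism. A sketch of a slightly larger run, using the standard db_bench flags --num, --reads, --threads, and --db (the values and the /tmp path here are arbitrary examples):

```shell
# Fill 1M keys, then read 100K of them back at random with 8 threads.
# --db points both runs at the same directory so the reads see the filled data.
./db_bench --benchmarks="fillrandom" --num=1000000 --db=/tmp/rocksdb_bench
./db_bench --benchmarks="readrandom" --use_existing_db --db=/tmp/rocksdb_bench \
    --reads=100000 --threads=8
```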
You can also combine multiple benchmarks in the string passed to --benchmarks so that they run sequentially. Example:
./db_bench --benchmarks="fillseq,readrandom,readseq"
More in-depth examples of db_bench usage can be found here and here.
Benchmarks List:
- fillseq -- write N values in sequential key order in async mode
- fillseqdeterministic -- write N values in the specified key order and keep the shape of the LSM tree
- fillrandom -- write N values in random key order in async mode
- filluniquerandomdeterministic -- write N values in a random key order and keep the shape of the LSM tree
- overwrite -- overwrite N values in random key order in async mode
- fillsync -- write N/100 values in random key order in sync mode
- fill100K -- write N/1000 100K values in random order in async mode
- deleteseq -- delete N keys in sequential order
- deleterandom -- delete N keys in random order
- readseq -- read N times sequentially
- readtocache -- 1 thread reading database sequentially
- readreverse -- read N times in reverse order
- readrandom -- read N times in random order
- readmissing -- read N missing keys in random order
- readwhilewriting -- 1 writer, N threads doing random reads
- readwhilemerging -- 1 merger, N threads doing random reads
- readrandomwriterandom -- N threads doing random-read, random-write
- prefixscanrandom -- prefix scan N times in random order
- updaterandom -- N threads doing read-modify-write for random keys
- appendrandom -- N threads doing read-modify-write with growing values
- mergerandom -- same as updaterandom/appendrandom using merge operator; must be used with merge_operator
- readrandommergerandom -- perform N random read-or-merge operations; must be used with merge_operator
- newiterator -- repeated iterator creation
- seekrandom -- N random seeks, call Next seek_nexts times per seek
- seekrandomwhilewriting -- seekrandom and 1 thread doing overwrite
- seekrandomwhilemerging -- seekrandom and 1 thread doing merge
- crc32c -- repeated crc32c of 4K of data
- xxhash -- repeated xxHash of 4K of data
- acquireload -- load N*1000 times
- fillseekseq -- write N values in sequential key order, then read them by seeking to each key
- randomtransaction -- execute N random transactions and verify correctness
- randomreplacekeys -- randomly replaces N keys by deleting the old version and putting the new version
- timeseries -- 1 writer generates time series data and multiple readers doing random reads on id
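As noted above, the merge benchmarks require a merge operator to be configured. A minimal sketch, assuming the built-in "uint64add" operator name is accepted by the --merge_operator flag:

```shell
# mergerandom issues Merge() calls instead of Put(); db_bench constructs
# the named built-in merge operator ("uint64add" here) for the database.
./db_bench --benchmarks="mergerandom" --merge_operator="uint64add"
```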
For a list of all options:
$ ./db_bench -help
persistent_cache_bench
$ ./persistent_cache_bench -help
persistent_cache_bench:
USAGE:
./persistent_cache_bench [OPTIONS]...
...
Flags from utilities/persistent_cache/persistent_cache_bench.cc:
-benchmark (Benchmark mode) type: bool default: false
-cache_size (Cache size) type: uint64 default: 18446744073709551615
-cache_type (Cache type. (block_cache, volatile, tiered)) type: string default: "block_cache"
-enable_pipelined_writes (Enable async writes) type: bool default: false
-iosize (Read IO size) type: int32 default: 4096
-log_path (Path for the log file) type: string default: "/tmp/log"
-nsec (nsec) type: int32 default: 10
-nthread_read (Lookup threads) type: int32 default: 1
-nthread_write (Insert threads) type: int32 default: 1
-path (Path for cachefile) type: string default: "/tmp/microbench/blkcache"
-volatile_cache_pct (Percentage of cache in memory tier.) type: int32 default: 10
-writer_iosize (File writer IO size) type: int32 default: 4096
-writer_qdepth (File writer qdepth) type: int32 default: 1
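Putting the flags from the help output together, a run might look like the following sketch (the thread counts and durations are arbitrary examples; the paths are the documented defaults):

```shell
# Exercise the volatile cache tier for 10 seconds with 2 insert threads
# and 4 lookup threads. -benchmark enables benchmark mode.
./persistent_cache_bench -benchmark -cache_type=volatile \
    -nthread_write=2 -nthread_read=4 -nsec=10 \
    -path=/tmp/microbench/blkcache -log_path=/tmp/log
```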
