db_bench

db_bench is the main tool that is used to benchmark RocksDB’s performance. RocksDB inherited db_bench from LevelDB, and enhanced it to support many additional options. db_bench supports many benchmarks to generate different types of workloads, and its various options can be used to control the tests.

If you are just getting started with db_bench, here are a few things you can try:

1. Start with a simple benchmark like fillseq (or fillrandom) to create a database and fill it with some data:

   ```
   ./db_bench --benchmarks="fillseq"
   ```

   If you want more statistics, add the meta operation "stats" and the --statistics flag:

   ```
   ./db_bench --benchmarks="fillseq,stats" --statistics
   ```

2. Read the data back:

   ```
   ./db_bench --benchmarks="readrandom" --use_existing_db
   ```

You can also combine multiple benchmarks in the comma-separated string passed to --benchmarks; they run sequentially in the order given. Example:

```
./db_bench --benchmarks="fillseq,readrandom,readseq"
```

More in-depth examples of db_bench usage can be found here and here.
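As a slightly fuller illustration, the sizing and reporting flags documented below can be combined with a load benchmark. This is a sketch only; the flag values and the /tmp/db_bench_test path are arbitrary examples, not recommendations:

```
# Load 10M keys (16-byte keys, 100-byte values), then print DB stats;
# --statistics and --histogram add counter stats and latency histograms.
./db_bench --benchmarks="fillseq,stats" --statistics --histogram \
    --num=10000000 --key_size=16 --value_size=100 \
    --db=/tmp/db_bench_test
```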

Benchmarks List:

  1. fillseq -- write N values in sequential key order in async mode
  2. fillseqdeterministic -- write N values in the specified key order and keep the shape of the LSM tree
  3. fillrandom -- write N values in random key order in async mode
  4. filluniquerandomdeterministic -- write N values in a random key order and keep the shape of the LSM tree
  5. overwrite -- overwrite N values in random key order in async mode
  6. fillsync -- write N/100 values in random key order in sync mode
  7. fill100K -- write N/1000 100K values in random order in async mode
  8. deleteseq -- delete N keys in sequential order
  9. deleterandom -- delete N keys in random order
  10. readseq -- read N times sequentially
  11. readtocache -- 1 thread reading database sequentially
  12. readreverse -- read N times in reverse order
  13. readrandom -- read N times in random order
  14. readmissing -- read N missing keys in random order
  15. readwhilewriting -- 1 writer, N threads doing random reads
  16. readwhilemerging -- 1 merger, N threads doing random reads
  17. readrandomwriterandom -- N threads doing random-read, random-write
  18. prefixscanrandom -- prefix scan N times in random order
  19. updaterandom -- N threads doing read-modify-write for random keys
  20. appendrandom -- N threads doing read-modify-write with growing values
  21. mergerandom -- same as updaterandom/appendrandom using merge operator. Must be used with merge_operator
  22. readrandommergerandom -- perform N random read-or-merge operations. Must be used with merge_operator
  23. newiterator -- repeated iterator creation
  24. seekrandom -- N random seeks, call Next seek_nexts times per seek
  25. seekrandomwhilewriting -- seekrandom and 1 thread doing overwrite
  26. seekrandomwhilemerging -- seekrandom and 1 thread doing merge
  27. crc32c -- repeated crc32c of 4K of data
  28. xxhash -- repeated xxHash of 4K of data
  29. acquireload -- load N*1000 times
  30. fillseekseq -- write N values in sequential key, then read them by seeking to each key
  31. randomtransaction -- execute N random transactions and verify correctness
  32. randomreplacekeys -- randomly replaces N keys by deleting the old version and putting the new version
  33. timeseries -- 1 writer generates time series data and multiple readers doing random reads on id
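Several of these benchmarks compose naturally. As an illustrative sketch (the numbers are arbitrary), readwhilewriting can be run against an existing database with the background writer rate-limited, so reads are measured under steady write pressure:

```
# 8 threads run for 60 seconds: one thread writes while the rest do
# random reads; --benchmark_write_rate_limit caps the writer at 2 MiB/s.
./db_bench --benchmarks="readwhilewriting" --use_existing_db \
    --num=10000000 --threads=8 --duration=60 \
    --benchmark_write_rate_limit=2097152 --statistics
```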

For a list of all options:

```
$ ./db_bench -help
db_bench:
USAGE:
./db_bench [OPTIONS]...

  Flags from tools/db_bench_tool.cc:
    -advise_random_on_open (Advise random access on table file open) type: bool
      default: true
    -allow_concurrent_memtable_write (Allow multi-writers to update mem tables
      in parallel.) type: bool default: true
    -base_background_compactions (The base number of concurrent background
      compactions to occur in parallel.) type: int32 default: 1
    -batch_size (Batch size) type: int64 default: 1
    -benchmark_read_rate_limit (If non-zero, db_bench will rate-limit the reads
      from RocksDB. This is the global rate in ops/second.) type: uint64
      default: 0
    -benchmark_write_rate_limit (If non-zero, db_bench will rate-limit the
      writes going into RocksDB. This is the global rate in bytes/second.)
      type: uint64 default: 0
    -benchmarks (Comma-separated list of operations to run in the specified
      order. Available benchmarks:
        fillseq -- write N values in sequential key order in async mode
        fillseqdeterministic -- write N values in the specified key order
          and keep the shape of the LSM tree
        fillrandom -- write N values in random key order in async mode
        filluniquerandomdeterministic -- write N values in a random key
          order and keep the shape of the LSM tree
        overwrite -- overwrite N values in random key order in async mode
        fillsync -- write N/100 values in random key order in sync mode
        fill100K -- write N/1000 100K values in random order in async mode
        deleteseq -- delete N keys in sequential order
        deleterandom -- delete N keys in random order
        readseq -- read N times sequentially
        readtocache -- 1 thread reading database sequentially
        readreverse -- read N times in reverse order
        readrandom -- read N times in random order
        readmissing -- read N missing keys in random order
        readwhilewriting -- 1 writer, N threads doing random reads
        readwhilemerging -- 1 merger, N threads doing random reads
        readrandomwriterandom -- N threads doing random-read, random-write
        prefixscanrandom -- prefix scan N times in random order
        updaterandom -- N threads doing read-modify-write for random keys
        appendrandom -- N threads doing read-modify-write with growing values
        mergerandom -- same as updaterandom/appendrandom using merge operator.
          Must be used with merge_operator
        readrandommergerandom -- perform N random read-or-merge operations. Must
          be used with merge_operator
        newiterator -- repeated iterator creation
        seekrandom -- N random seeks, call Next seek_nexts times per seek
        seekrandomwhilewriting -- seekrandom and 1 thread doing overwrite
        seekrandomwhilemerging -- seekrandom and 1 thread doing merge
        crc32c -- repeated crc32c of 4K of data
        xxhash -- repeated xxHash of 4K of data
        acquireload -- load N*1000 times
        fillseekseq -- write N values in sequential key, then read them by
          seeking to each key
        randomtransaction -- execute N random transactions and verify
          correctness
        randomreplacekeys -- randomly replaces N keys by deleting the old
          version and putting the new version
        timeseries -- 1 writer generates time series data and
          multiple readers doing random reads on id
      Meta operations:
        compact -- Compact the entire DB
        stats -- Print DB stats
        resetstats -- Reset DB stats
        levelstats -- Print the number of files and bytes per level
        sstables -- Print sstable info
        heapprofile -- Dump a heap profile (if supported by this port)
      ) type: string
      default: "fillseq,fillseqdeterministic,fillsync,fillrandom,filluniquerandomdeterministic,overwrite,readrandom,newiterator,newiteratorwhilewriting,seekrandom,seekrandomwhilewriting,seekrandomwhilemerging,readseq,readreverse,compact,readrandom,multireadrandom,readseq,readtocache,readreverse,readwhilewriting,readwhilemerging,readrandomwriterandom,updaterandom,randomwithverify,fill100K,crc32c,xxhash,compress,uncompress,acquireload,fillseekseq,randomtransaction,randomreplacekeys,timeseries"
    -block_restart_interval (Number of keys between restart points for delta
      encoding of keys in data block.) type: int32 default: 16
    -block_size (Number of bytes in a block.) type: int32 default: 4096
    -bloom_bits (Bloom filter bits per key. Negative means use default
      settings.) type: int32 default: -1
    -bloom_locality (Control bloom filter probes locality) type: int32
      default: 0
    -bytes_per_sync (Allows OS to incrementally sync SST files to disk while
      they are being written, in the background. Issue one request for every
      bytes_per_sync written. 0 turns it off.) type: uint64 default: 0
    -cache_high_pri_pool_ratio (Ratio of block cache reserve for high pri
      blocks. If > 0.0, we also enable
      cache_index_and_filter_blocks_with_high_priority.) type: double
      default: 0
    -cache_index_and_filter_blocks (Cache index/filter blocks in block cache.)
      type: bool default: false
    -cache_numshardbits (Number of shards for the block cache is 2 **
      cache_numshardbits. Negative means use default settings. This is applied
      only if FLAGS_cache_size is non-negative.) type: int32 default: 6
    -cache_size (Number of bytes to use as a cache of uncompressed data)
      type: int64 default: 8388608
    -compaction_fadvice (Access pattern advice when a file is compacted)
      type: string default: "NORMAL"
    -compaction_pri (priority of files to compaction: by size or by data age)
      type: int32 default: 0
    -compaction_readahead_size (Compaction readahead size) type: int32
      default: 0
    -compaction_style (style of compaction: level-based, universal and fifo)
      type: int32 default: 0
    -compressed_cache_size (Number of bytes to use as a cache of compressed
      data.) type: int64 default: -1
    -compression_level (Compression level. For zlib this should be -1 for the
      default level, or between 0 and 9.) type: int32 default: -1
    -compression_max_dict_bytes (Maximum size of dictionary used to prime the
      compression library.) type: int32 default: 0
    -compression_ratio (Arrange to generate values that shrink to this fraction
      of their original size after compression) type: double default: 0.5
    -compression_type (Algorithm to use to compress the database) type: string
      default: "snappy"
    -cuckoo_hash_ratio (Hash ratio for Cuckoo SST table.) type: double
      default: 0.90000000000000002
    -db (Use the db with the following name.) type: string default: ""
    -db_write_buffer_size (Number of bytes to buffer in all memtables before
      compacting) type: int64 default: 0
    -delayed_write_rate (Limited bytes allowed to DB when soft_rate_limit or
      level0_slowdown_writes_trigger triggers) type: uint64 default: 8388608
    -delete_obsolete_files_period_micros (Ignored. Left here for backward
      compatibility) type: uint64 default: 0
    -deletepercent (Percentage of deletes out of reads/writes/deletes (used in
      RandomWithVerify only). RandomWithVerify calculates writepercent as (100
      - FLAGS_readwritepercent - deletepercent), so deletepercent must be
      smaller than (100 - FLAGS_readwritepercent)) type: int32 default: 2
    -deletes (Number of delete operations to do. If negative, do FLAGS_num
      deletions.) type: int64 default: -1
    -disable_auto_compactions (Do not auto trigger compactions) type: bool
      default: false
    -disable_seek_compaction (Not used, left here for backwards compatibility)
      type: int32 default: 0
    -disable_wal (If true, do not write WAL for write.) type: bool
      default: false
    -dump_malloc_stats (Dump malloc stats in LOG ) type: bool default: true
    -duration (Time in seconds for the random-ops tests to run. When 0 then num
      & reads determine the test duration) type: int32 default: 0
    -enable_io_prio (Lower the background flush/compaction threads' IO
      priority) type: bool default: false
    -enable_numa (Make operations aware of NUMA architecture and bind memory
      and cpus corresponding to nodes together. In NUMA, memory in same node as
      CPUs are closer when compared to memory in other nodes. Reads can be
      faster when the process is bound to CPU and memory of same node. Use
      "$numactl --hardware" command to see NUMA memory architecture.)
      type: bool default: false
    -enable_pipelined_write (Allow WAL and memtable writes to be pipelined)
      type: bool default: true
    -enable_write_thread_adaptive_yield (Use a yielding spin loop for brief
      writer thread waits.) type: bool default: true
    -env_uri (URI for registry Env lookup. Mutually exclusive with --hdfs.)
      type: string default: ""
    -expand_range_tombstones (Expand range tombstone into sequential regular
      tombstones.) type: bool default: false
    -expire_style (Style to remove expired time entries. Can be one of the
      options below: none (do not expired data), compaction_filter (use a
      compaction filter to remove expired data), delete (seek IDs and remove
      expired data) (used in TimeSeries only).) type: string default: "none"
    -fifo_compaction_allow_compaction (Allow compaction in FIFO compaction.)
      type: bool default: true
    -fifo_compaction_max_table_files_size_mb (The limit of total table file
      sizes to trigger FIFO compaction) type: uint64 default: 0
    -file_opening_threads (If open_files is set to -1, this option set the
      number of threads that will be used to open files during DB::Open())
      type: int32 default: 16
    -finish_after_writes (Write thread terminates after all writes are
      finished) type: bool default: false
    -hard_pending_compaction_bytes_limit (Stop writes if pending compaction
      bytes exceed this number) type: uint64 default: 137438953472
    -hard_rate_limit (DEPRECATED) type: double default: 0
    -hash_bucket_count (hash bucket count) type: int64 default: 1048576
    -hdfs (Name of hdfs environment. Mutually exclusive with --env_uri.)
      type: string default: ""
    -histogram (Print histogram of operation timings) type: bool default: false
    -identity_as_first_hash (the first hash function of cuckoo table becomes an
      identity function. This is only valid when key is 8 bytes) type: bool
      default: false
    -index_block_restart_interval (Number of keys between restart points for
      delta encoding of keys in index block.) type: int32 default: 1
    -key_id_range (Range of possible value of key id (used in TimeSeries
      only).) type: int32 default: 100000
    -key_size (size of each key) type: int32 default: 16
    -keys_per_prefix (control average number of keys generated per prefix, 0
      means no special handling of the prefix, i.e. use the prefix comes with
      the generated random number.) type: int64 default: 0
    -level0_file_num_compaction_trigger (Number of files in level-0 when
      compactions start) type: int32 default: 4
    -level0_slowdown_writes_trigger (Number of files in level-0 that will slow
      down writes.) type: int32 default: 20
    -level0_stop_writes_trigger (Number of files in level-0 that will trigger
      put stop.) type: int32 default: 36
    -level_compaction_dynamic_level_bytes (Whether level size base is dynamic)
      type: bool default: false
    -max_background_compactions (The maximum number of concurrent background
      compactions that can occur in parallel.) type: int32 default: 1
    -max_background_flushes (The maximum number of concurrent background
      flushes that can occur in parallel.) type: int32 default: 1
    -max_bytes_for_level_base (Max bytes for level-1) type: uint64
      default: 268435456
    -max_bytes_for_level_multiplier (A multiplier to compute max bytes for
      level-N (N >= 2)) type: double default: 10
    -max_bytes_for_level_multiplier_additional (A vector that specifies
      additional fanout per level) type: string default: ""
    -max_compaction_bytes (Max bytes allowed in one compaction) type: uint64
      default: 0
    -max_num_range_tombstones (Maximum number of range tombstones to insert.)
      type: int64 default: 0
    -max_successive_merges (Maximum number of successive merge operations on a
      key in the memtable) type: int32 default: 0
    -max_total_wal_size (Set total max WAL size) type: uint64 default: 0
    -max_write_buffer_number (The number of in-memory memtables. Each memtable
      is of sizewrite_buffer_size.) type: int32 default: 2
    -max_write_buffer_number_to_maintain (The total maximum number of write
      buffers to maintain in memory including copies of buffers that have
      already been flushed. Unlike max_write_buffer_number, this parameter does
      not affect flushing. This controls the minimum amount of write history
      that will be available in memory for conflict checking when Transactions
      are used. If this value is too low, some transactions may fail at commit
      time due to not being able to determine whether there were any write
      conflicts. Setting this value to 0 will cause write buffers to be freed
      immediately after they are flushed. If this value is set to -1,
      'max_write_buffer_number' will be used.) type: int32 default: 0
    -memtable_bloom_size_ratio (Ratio of memtable size used for bloom filter. 0
      means no bloom filter.) type: double default: 0
    -memtable_insert_with_hint_prefix_size (If non-zero, enable memtable insert
      with hint with the given prefix size.) type: int32 default: 0
    -memtable_use_huge_page (Try to use huge page in memtables.) type: bool
      default: false
    -memtablerep () type: string default: "skip_list"
    -merge_keys (Number of distinct keys to use for MergeRandom and
      ReadRandomMergeRandom. If negative, there will be FLAGS_num keys.)
      type: int64 default: -1
    -merge_operator (The merge operator to use with the database.If a new merge
      operator is specified, be sure to use fresh database The possible merge
      operators are defined in utilities/merge_operators.h) type: string
      default: ""
    -mergereadpercent (Ratio of merges to merges&reads (expressed as
      percentage) for the ReadRandomMergeRandom workload. The default value 70
      means 70% out of all read and merge operations are merges. In other
      words, 7 merges for every 3 gets.) type: int32 default: 70
    -min_level_to_compress (If non-negative, compression starts from this
      level. Levels with number < min_level_to_compress are not compressed.
      Otherwise, apply compression_type to all levels.) type: int32 default: -1
    -min_write_buffer_number_to_merge (The minimum number of write buffers that
      will be merged togetherbefore writing to storage. This is cheap because
      it is anin-memory merge. If this feature is not enabled, then all
      thesewrite buffers are flushed to L0 as separate files and this increases
      read amplification because a get request has to check in all of these
      files. Also, an in-memory merge may result in writing less data to
      storage if there are duplicate records in each of these individual write
      buffers.) type: int32 default: 1
    -mmap_read (Allow reads to occur via mmap-ing files) type: bool
      default: false
    -mmap_write (Allow writes to occur via mmap-ing files) type: bool
      default: false
    -new_table_reader_for_compaction_inputs (If true, uses a separate file
      handle for compaction inputs) type: int32 default: 1
    -num (Number of key/values to place in database) type: int64
      default: 1000000
    -num_column_families (Number of Column Families to use.) type: int32
      default: 1
    -num_deletion_threads (Number of threads to do deletion (used in TimeSeries
      and delete expire_style only).) type: int32 default: 1
    -num_hot_column_families (Number of Hot Column Families. If more than 0,
      only write to this number of column families. After finishing all the
      writes to them, create new set of column families and insert to them.
      Only used when num_column_families > 1.) type: int32 default: 0
    -num_levels (The total number of levels) type: int32 default: 7
    -num_multi_db (Number of DBs used in the benchmark. 0 means single DB.)
      type: int32 default: 0
    -numdistinct (Number of distinct keys to use. Used in RandomWithVerify to
      read/write on fewer keys so that gets are more likely to find the key and
      puts are more likely to update the same key) type: int64 default: 1000
    -open_files (Maximum number of files to keep open at the same time (use
      default if == 0)) type: int32 default: -1
    -optimistic_transaction_db (Open a OptimisticTransactionDB instance.
      Required for randomtransaction benchmark.) type: bool default: false
    -optimize_filters_for_hits (Optimizes bloom filters for workloads for most
      lookups return a value. For now this doesn't create bloom filters for the
      max level of the LSM to reduce metadata that should fit in RAM. )
      type: bool default: false
    -options_file (The path to a RocksDB options file. If specified, then
      db_bench will run with the RocksDB options in the default column family
      of the specified options file. Note that with this setting, db_bench will
      ONLY accept the following RocksDB options related command-line arguments,
      all other arguments that are related to RocksDB options will be ignored:
        --use_existing_db
        --statistics
        --row_cache_size
        --row_cache_numshardbits
        --enable_io_prio
        --dump_malloc_stats
        --num_multi_db
      ) type: string default: ""
    -perf_level (Level of perf collection) type: int32 default: 1
    -pin_l0_filter_and_index_blocks_in_cache (Pin index/filter blocks of L0
      files in block cache.) type: bool default: false
    -pin_slice (use pinnable slice for point lookup) type: bool default: true
    -prefix_size (control the prefix size for HashSkipList and plain table)
      type: int32 default: 0
    -random_access_max_buffer_size (Maximum windows randomaccess buffer size)
      type: int32 default: 1048576
    -range_tombstone_width (Number of keys in tombstone's range) type: int64
      default: 100
    -rate_limit_delay_max_milliseconds (When hard_rate_limit is set then this
      is the max time a put will be stalled.) type: int32 default: 1000
    -rate_limiter_bytes_per_sec (Set options.rate_limiter value.) type: uint64
      default: 0
    -read_amp_bytes_per_bit (Number of bytes per bit to be used in block
      read-amp bitmap) type: int32 default: 0
    -read_cache_direct_read (Whether to use Direct IO for reading from read
      cache) type: bool default: true
    -read_cache_direct_write (Whether to use Direct IO for writing to the read
      cache) type: bool default: true
    -read_cache_path (If not empty string, a read cache will be used in this
      path) type: string default: ""
    -read_cache_size (Maximum size of the read cache) type: int64
      default: 4294967296
    -read_random_exp_range (Read random's key will be generated using
      distribution of num * exp(-r) where r is uniform number from 0 to this
      value. The larger the number is, the more skewed the reads are. Only used
      in readrandom and multireadrandom benchmarks.) type: double default: 0
    -readonly (Run read only benchmarks.) type: bool default: false
    -reads (Number of read operations to do. If negative, do FLAGS_num reads.)
      type: int64 default: -1
    -readwritepercent (Ratio of reads to reads/writes (expressed as percentage)
      for the ReadRandomWriteRandom workload. The default value 90 means 90%
      operations out of all reads and writes operations are reads. In other
      words, 9 gets for every 1 put.) type: int32 default: 90
    -report_bg_io_stats (Measure times spents on I/Os while in compactions. )
      type: bool default: false
    -report_file (Filename where some simple stats are reported to (if
      --report_interval_seconds is bigger than 0)) type: string
      default: "report.csv"
    -report_file_operations (if report number of file operations) type: bool
      default: false
    -report_interval_seconds (If greater than zero, it will write simple stats
      in CVS format to --report_file every N seconds) type: int64 default: 0
    -reverse_iterator (When true use Prev rather than Next for iterators that
      do Seek and then Next) type: bool default: false
    -row_cache_size (Number of bytes to use as a cache of individual rows (0 =
      disabled).) type: int64 default: 0
    -seed (Seed base for random number generators. When 0 it is deterministic.)
      type: int64 default: 0
    -seek_nexts (How many times to call Next() after Seek() in fillseekseq,
      seekrandom, seekrandomwhilewriting and seekrandomwhilemerging)
      type: int32 default: 0
    -show_table_properties (If true, then per-level table properties will be
      printed on every stats-interval when stats_interval is set and
      stats_per_interval is on.) type: bool default: false
    -simcache_size (Number of bytes to use as a simcache of uncompressed data.
      Nagative value disables simcache.) type: int64 default: -1
    -skip_list_lookahead (Used with skip_list memtablerep; try linear search
      first for this many steps from the previous position) type: int32
      default: 0
    -soft_pending_compaction_bytes_limit (Slowdown writes if pending compaction
      bytes exceed this number) type: uint64 default: 68719476736
    -soft_rate_limit (DEPRECATED) type: double default: 0
    -statistics (Database statistics) type: bool default: false
    -statistics_string (Serialized statistics string) type: string default: ""
    -stats_interval (Stats are reported every N operations when this is greater
      than zero. When 0 the interval grows over time.) type: int64 default: 0
    -stats_interval_seconds (Report stats every N seconds. This overrides
      stats_interval when both are > 0.) type: int64 default: 0
    -stats_per_interval (Reports additional stats per interval when this is
      greater than 0.) type: int32 default: 0
    -stddev (Standard deviation of normal distribution used for picking keys
      (used in RandomReplaceKeys only).) type: double default: 2000
    -subcompactions (Maximum number of subcompactions to divide L0-L1
      compactions into.) type: uint64 default: 1
    -sync (Sync all writes to disk) type: bool default: false
    -table_cache_numshardbits () type: int32 default: 4
    -target_file_size_base (Target file size at level-1) type: int64
      default: 67108864
    -target_file_size_multiplier (A multiplier to compute target level-N file
      size (N >= 2)) type: int32 default: 1
    -thread_status_per_interval (Takes and report a snapshot of the current
      status of each thread when this is greater than 0.) type: int32
      default: 0
    -threads (Number of concurrent threads to run.) type: int32 default: 1
    -time_range (Range of timestamp that store in the database (used in
      TimeSeries only).) type: uint64 default: 100000
    -transaction_db (Open a TransactionDB instance. Required for
      randomtransaction benchmark.) type: bool default: false
    -transaction_lock_timeout (If using a transaction_db, specifies the lock
      wait timeout in milliseconds before failing a transaction waiting on a
      lock) type: uint64 default: 100
    -transaction_set_snapshot (Setting to true will have each transaction call
      SetSnapshot() upon creation.) type: bool default: false
    -transaction_sets (Number of keys each transaction will modify (use in
      RandomTransaction only). Max: 9999) type: uint64 default: 2
    -transaction_sleep (Max microseconds to sleep in between reading and
      writing a value (used in RandomTransaction only). ) type: int32
      default: 0
    -truth_db (Truth key/values used when using verify) type: string
      default: "/dev/shm/truth_db/dbbench"
    -universal_allow_trivial_move (Allow trivial move in universal compaction.)
      type: bool default: false
    -universal_compression_size_percent (The percentage of the database to
      compress for universal compaction. -1 means compress everything.)
      type: int32 default: -1
    -universal_max_merge_width (The max number of files to compact in universal
      style compaction) type: int32 default: 0
    -universal_max_size_amplification_percent (The max size amplification for
      universal style compaction) type: int32 default: 0
    -universal_min_merge_width (The minimum number of files in a single
      compaction run (for universal compaction only).) type: int32 default: 0
    -universal_size_ratio (Percentage flexibility while comparing file size
      (for universal compaction only).) type: int32 default: 0
    -use_adaptive_mutex (Use adaptive mutex) type: bool default: false
    -use_blob_db (Open a BlobDB instance. Required for largevalue benchmark.)
      type: bool default: false
    -use_block_based_filter (if use kBlockBasedFilter instead of kFullFilter
      for filter block. This is valid if only we use BlockTable) type: bool
      default: false
    -use_clock_cache (Replace default LRU block cache with clock cache.)
      type: bool default: false
    -use_cuckoo_table (if use cuckoo table format) type: bool default: false
    -use_direct_io_for_flush_and_compaction (Use O_DIRECT for background flush
      and compaction I/O) type: bool default: false
    -use_direct_reads (Use O_DIRECT for reading data) type: bool default: false
    -use_existing_db (If true, do not destroy the existing database. If you
      set this flag and also specify a benchmark that wants a fresh database,
      that benchmark will fail.) type: bool default: false
    -use_fsync (If true, issue fsync instead of fdatasync) type: bool
      default: false
    -use_hash_search (if use kHashSearch instead of kBinarySearch. This is
      valid if only we use BlockTable) type: bool default: false
    -use_plain_table (if use plain table instead of block-based table format)
      type: bool default: false
    -use_single_deletes (Use single deletes (used in RandomReplaceKeys only).)
      type: bool default: true
    -use_stderr_info_logger (Write info logs to stderr instead of to LOG file.
      ) type: bool default: false
    -use_tailing_iterator (Use tailing iterator to access a series of keys
      instead of get) type: bool default: false
    -use_uint64_comparator (use Uint64 user comparator) type: bool
      default: false
    -value_size (Size of each value) type: int32 default: 100
    -verify_checksum (Verify checksum for every block read from storage)
      type: bool default: false
    -wal_bytes_per_sync (Allows OS to incrementally sync WAL files to disk
      while they are being written, in the background. Issue one request for
      every wal_bytes_per_sync written. 0 turns it off.) type: uint64
      default: 0
    -wal_dir (If not empty, use the given dir for WAL) type: string default: ""
    -wal_size_limit_MB (Set the size limit for the WAL Files in MB.)
      type: uint64 default: 0
    -wal_ttl_seconds (Set the TTL for the WAL Files in seconds.) type: uint64
      default: 0
    -writable_file_max_buffer_size (Maximum write buffer for Writable File)
      type: int32 default: 1048576
    -write_buffer_size (Number of bytes to buffer in memtable before
      compacting) type: int64 default: 67108864
    -write_thread_max_yield_usec (Maximum microseconds for
      enable_write_thread_adaptive_yield operation.) type: uint64 default: 100
    -write_thread_slow_yield_usec (The threshold at which a slow yield is
      considered a signal that other processes or threads want the core.)
      type: uint64 default: 3
    -writes (Number of write operations to do. If negative, do --num reads.)
      type: int64 default: -1
    -writes_per_range_tombstone (Number of writes between range tombstones)
      type: int64 default: 0
```
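Note that mergerandom and readrandommergerandom require --merge_operator, and the help text above advises using a fresh database when specifying a new operator. A minimal sketch, assuming the "put" operator id from utilities/merge_operators.h is accepted by your build (the exact set of ids may differ):

```
# Merge benchmark with a distinct-key budget; "put" is an assumed operator id.
./db_bench --benchmarks="mergerandom" --merge_operator="put" \
    --num=1000000 --merge_keys=100000
```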

cache_bench

```
./cache_bench --help
cache_bench: Warning: SetUsageMessage() never called
...
  Flags from cache/cache_bench.cc:
    -cache_size (Number of bytes to use as a cache of uncompressed data.)
      type: int64 default: 8388608
    -erase_percent (Ratio of erase to total workload (expressed as a
      percentage)) type: int32 default: 10
    -insert_percent (Ratio of insert to total workload (expressed as a
      percentage)) type: int32 default: 40
    -lookup_percent (Ratio of lookup to total workload (expressed as a
      percentage)) type: int32 default: 50
    -max_key (Max number of key to place in cache) type: int64
      default: 1073741824
    -num_shard_bits (shard_bits.) type: int32 default: 4
    -ops_per_thread (Number of operations per thread.) type: uint64
      default: 1200000
    -populate_cache (Populate cache before operations) type: bool
      default: false
    -threads (Number of concurrent threads to run.) type: int32 default: 16
    -use_clock_cache () type: bool default: false
```
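A representative cache_bench invocation using the flags above (the values are illustrative, not recommendations) might enlarge the cache, pre-populate it, and skew the operation mix toward lookups:

```
# 16 threads, 1M ops each, against a pre-populated 128 MiB cache;
# the percent flags shift the default 50/40/10 lookup/insert/erase mix.
./cache_bench --cache_size=134217728 --num_shard_bits=6 \
    --threads=16 --ops_per_thread=1000000 --populate_cache \
    --lookup_percent=80 --insert_percent=15 --erase_percent=5
```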

persistent_cache_bench

```
$ ./persistent_cache_bench -help
persistent_cache_bench:
USAGE:
./persistent_cache_bench [OPTIONS]...
...
  Flags from utilities/persistent_cache/persistent_cache_bench.cc:
    -benchmark (Benchmark mode) type: bool default: false
    -cache_size (Cache size) type: uint64 default: 18446744073709551615
    -cache_type (Cache type. (block_cache, volatile, tiered)) type: string
      default: "block_cache"
    -enable_pipelined_writes (Enable async writes) type: bool default: false
    -iosize (Read IO size) type: int32 default: 4096
    -log_path (Path for the log file) type: string default: "/tmp/log"
    -nsec (nsec) type: int32 default: 10
    -nthread_read (Lookup threads) type: int32 default: 1
    -nthread_write (Insert threads) type: int32 default: 1
    -path (Path for cachefile) type: string default: "/tmp/microbench/blkcache"
    -volatile_cache_pct (Percentage of cache in memory tier.) type: int32
      default: 10
    -writer_iosize (File writer IO size) type: int32 default: 4096
    -writer_qdepth (File writer qdepth) type: int32 default: 1
```
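A sketch of a tiered-cache run exercising the flags above (the paths and sizes are arbitrary examples):

```
# 4 lookup threads and 1 insert thread run for 30 seconds against a 1 GiB
# tiered cache, with 10% of it held in the volatile (in-memory) tier.
./persistent_cache_bench --benchmark --cache_type=tiered \
    --cache_size=1073741824 --volatile_cache_pct=10 \
    --path=/tmp/microbench/blkcache --log_path=/tmp/log \
    --nthread_read=4 --nthread_write=1 --nsec=30
```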