crushtool – CRUSH map manipulation tool

Synopsis

crushtool ( -d map | -c map.txt | --build --num_osds numosds layer1 ... | --test ) [ -o outfile ]

Description

crushtool is a utility that lets you create, compile, decompile, and test CRUSH map files.

CRUSH is a pseudo-random data distribution algorithm that efficiently maps input values (which, in the context of Ceph, correspond to Placement Groups) across a heterogeneous, hierarchically structured device map. The algorithm was originally described in detail in the following paper (although it has evolved some since then):

  http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf

The tool has four modes of operation.

  • --compile|-c map.txt
    will compile a plaintext map.txt into a binary map file.
  • --decompile|-d map
    will take the compiled map and decompile it into a plaintext source file, suitable for editing.
  • --build --num_osds {num-osds} layer1 …
    will create a map with the given layer structure. See below for a detailed explanation.
  • --test
    will perform a dry run of a CRUSH mapping for a range of input values [--min-x,--max-x] (default [0,1023]) which can be thought of as simulated Placement Groups. See below for a more detailed explanation.
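For instance, the compile and decompile modes are typically used as a round trip (the file names here are only illustrative; the same commands appear again in the editing workflow below):

  $ crushtool -c map.txt -o crushmap    # compile plaintext source into a binary map
  $ crushtool -d crushmap -o map.txt    # decompile the binary map back into editable text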

Unlike other Ceph tools, crushtool does not accept generic options such as --debug-crush from the command line. They can, however, be provided via the CEPH_ARGS environment variable. For instance, to silence all output from the CRUSH subsystem:

  CEPH_ARGS="--debug-crush 0" crushtool ...

Running tests with --test

The test mode will use the input crush map (as specified with -i map) and perform a dry run of CRUSH mapping or random placement (if --simulate is set). On completion, two kinds of reports can be created. 1) The --show-… options output human readable information on stderr. 2) The --output-csv option creates CSV files that are documented by the --help-output option.

Note: Each Placement Group (PG) has an integer ID which can be obtained from ceph pg dump (for example PG 2.2f means pool id 2, PG id 0x2f). The pool and PG IDs are combined by a function to get a value which is given to CRUSH to map it to OSDs. crushtool does not know about PGs or pools; it only runs simulations by mapping values in the range [--min-x,--max-x].
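For illustration (the map name and range here are arbitrary), a dry run over a small simulated range could be invoked like this:

  $ crushtool -i mymap --test --min-x 0 --max-x 99 --show-mappings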

  • --show-statistics
    Displays a summary of the distribution. For instance:
  rule 1 (metadata) num_rep 5 result size == 5: 1024/1024

shows that rule 1 which is named metadata successfully mapped 1024 values to result size == 5 devices when trying to map them to num_rep 5 replicas. When it fails to provide the required mapping, presumably because the number of tries must be increased, a breakdown of the failures is displayed. For instance:

  rule 1 (metadata) num_rep 10 result size == 8: 4/1024
  rule 1 (metadata) num_rep 10 result size == 9: 93/1024
  rule 1 (metadata) num_rep 10 result size == 10: 927/1024

shows that although num_rep 10 replicas were required, 4 out of 1024 values (4/1024) were mapped to result size == 8 devices only.
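Such a summary comes from an invocation along these lines (map name illustrative); adding --simulate would report random placement instead of CRUSH mapping, which can be useful for comparison:

  $ crushtool -i mymap --test --show-statistics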

  • --show-mappings
    Displays the mapping of each value in the range [--min-x,--max-x]. For instance:
  CRUSH rule 1 x 24 [11,6]

shows that value 24 is mapped to devices [11,6] by rule 1.

  • --show-bad-mappings
    Displays which values failed to be mapped to the required number of devices. For instance:
  bad mapping rule 1 x 781 num_rep 7 result [8,10,2,11,6,9]

shows that when rule 1 was required to map 7 devices, it could map only six: [8,10,2,11,6,9].

  • --show-utilization
    Displays the expected and actual utilization for each device, for each number of replicas. For instance:
  device 0: stored : 951 expected : 853.333
  device 1: stored : 963 expected : 853.333
  ...

shows that device 0 stored 951 values and was expected to store 853. Implies --show-statistics.
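A sketch of the corresponding invocation (map name illustrative); because --show-utilization implies --show-statistics, the per-device lines appear alongside the distribution summary:

  $ crushtool -i mymap --test --show-utilization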

  • --show-utilization-all
    Displays the same as --show-utilization but does not suppress output when the weight of a device is zero. Implies --show-statistics.
  • --show-choose-tries
    Displays how many attempts were needed to find a device mapping. For instance:
  0: 95224
  1: 3745
  2: 2225
  ...

shows that 95224 mappings succeeded without retries, 3745 mappings succeeded with one retry, etc. There are as many rows as the value of the --set-choose-total-tries option.
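A sketch combining the histogram with a non-default retry limit (map name and value illustrative):

  $ crushtool -i mymap --test --show-choose-tries --set-choose-total-tries 100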

  • --output-csv
    Creates CSV files (in the current directory) containing information documented by --help-output. The files are named after the rule used when collecting the statistics. For instance, if the rule 'metadata' is used, the CSV files will be:
  metadata-absolute_weights.csv
  metadata-device_utilization.csv
  ...

The first line of the file briefly explains the column layout. For instance:

  metadata-absolute_weights.csv
  Device ID, Absolute Weight
  0,1
  ...
  • --output-name NAME
    Prepend NAME to the file names generated when --output-csv is specified. For instance --output-name FOO will create files:
  FOO-metadata-absolute_weights.csv
  FOO-metadata-device_utilization.csv
  ...
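These options are used together with --test; an invocation along these lines (map name illustrative) would produce the files listed above:

  $ crushtool -i mymap --test --output-csv --output-name FOO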

The --set-… options can be used to modify the tunables of the input crush map. The input crush map is modified in memory. For example:

  $ crushtool -i mymap --test --show-bad-mappings
  bad mapping rule 1 x 781 num_rep 7 result [8,10,2,11,6,9]

could be fixed by increasing the choose-total-tries as follows:

  $ crushtool -i mymap --test \
      --show-bad-mappings --set-choose-total-tries 500

Building a map with --build

The build mode will generate hierarchical maps. The first argument specifies the number of devices (leaves) in the CRUSH hierarchy. Each layer describes how the layer (or devices) preceding it should be grouped.

Each layer consists of:

  bucket ( uniform | list | tree | straw | straw2 ) size

The bucket is the type of the buckets in the layer (e.g. “rack”). Each bucket name will be built by appending a unique number to the bucket string (e.g. “rack0”, “rack1”…).

The second component is the type of bucket: straw should be used most of the time.

The third component is the maximum size of the bucket. A size of zero means a bucket of infinite capacity.

Example

Suppose we have two rows with two racks each and 20 nodes per rack. Suppose each node contains 4 storage devices for Ceph OSD Daemons. This configuration allows us to deploy 320 Ceph OSD Daemons. Let's assume a 42U rack with 2U nodes, leaving an extra 2U for a rack switch.

To reflect our hierarchy of devices, nodes, racks and rows, we would execute the following:

  $ crushtool -o crushmap --build --num_osds 320 \
        node straw 4 \
        rack straw 20 \
        row straw 2 \
        root straw 0
  # id    weight  type name      reweight
  -87     320     root root
  -85     160     row row0
  -81     80      rack rack0
  -1      4       node node0
  0       1       osd.0   1
  1       1       osd.1   1
  2       1       osd.2   1
  3       1       osd.3   1
  -2      4       node node1
  4       1       osd.4   1
  5       1       osd.5   1
  ...

CRUSH rules are created so the generated crushmap can be tested. They are the same rules as the ones created by default when creating a new Ceph cluster. They can be further edited with:

  # decompile
  crushtool -d crushmap -o map.txt

  # edit
  emacs map.txt

  # recompile
  crushtool -c map.txt -o crushmap

Reclassify

The reclassify function allows users to transition from older maps that maintain parallel hierarchies for OSDs of different types to a modern CRUSH map that makes use of the device class feature. For more information, see http://docs.ceph.com/docs/master/rados/operations/crush-map-edits/#migrating-from-a-legacy-ssd-rule-to-device-classes.
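As a rough sketch only (the --reclassify-root and --reclassify-bucket arguments, the bucket name pattern, and the device classes below are assumptions that must be adapted to your map; the linked guide is authoritative), such a migration might be started with something like:

  $ crushtool -i original --reclassify \
        --reclassify-root default hdd \
        --reclassify-bucket %-ssd ssd default \
        -o adjusted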

Example output from --test

See https://github.com/ceph/ceph/blob/master/src/test/cli/crushtool/set-choose.t for sample crushtool --test commands and output produced thereby.

Availability

crushtool is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.

See also

ceph(8), osdmaptool(8)

Authors

John Wilkins, Sage Weil, Loic Dachary