Manually editing a CRUSH Map

Note

Manually editing the CRUSH map is considered an advanced administrator operation. All CRUSH changes that are necessary for the overwhelming majority of installations are possible via the standard ceph CLI and do not require manual CRUSH map edits. If you have identified a use case where manual edits are necessary, consider contacting the Ceph developers so that future versions of Ceph can make this unnecessary.

To edit an existing CRUSH map:

  1. Get the CRUSH map.
  2. Decompile the CRUSH map.
  3. Edit at least one of the devices, buckets, and rules sections.
  4. Recompile the CRUSH map.
  5. Set the updated CRUSH map.

For details on setting the CRUSH map rule for a specific pool, see Set Pool Values.

Get a CRUSH Map

To get the CRUSH map for your cluster, execute the following:

  ceph osd getcrushmap -o {compiled-crushmap-filename}

Ceph will output (-o) a compiled CRUSH map to the filename you specified. Since the CRUSH map is in a compiled form, you must decompile it first before you can edit it.

Decompile a CRUSH Map

To decompile a CRUSH map, execute the following:

  crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}
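
For example, a full round trip might look like the following sketch. The file names are illustrative; the recompile step uses crushtool -c, and the final injection uses ceph osd setcrushmap as shown later in this document:

  ceph osd getcrushmap -o crushmap.bin        # extract the compiled map
  crushtool -d crushmap.bin -o crushmap.txt   # decompile it to editable text
  # ... edit crushmap.txt ...
  crushtool -c crushmap.txt -o crushmap.new   # recompile the edited map
  ceph osd setcrushmap -i crushmap.new        # inject the new map into the cluster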

Sections

There are six main sections to a CRUSH Map.

  • tunables: The preamble at the top of the map describes any tunables for CRUSH behavior that vary from the historical/legacy CRUSH behavior. These correct for old bugs, optimizations, or other changes in behavior that have been made over the years to improve CRUSH’s behavior.

  • devices: Devices are individual ceph-osd daemons that can store data.

  • types: Bucket types define the types of buckets used in your CRUSH hierarchy. Buckets consist of a hierarchical aggregation of storage locations (e.g., rows, racks, chassis, hosts, etc.) and their assigned weights.

  • buckets: Once you define bucket types, you must define each node in the hierarchy, its type, and which devices or other nodes it contains.

  • rules: Rules define policy about how data is distributed across devices in the hierarchy.

  • choose_args: Choose_args are alternative weights associated with the hierarchy that have been adjusted to optimize data placement. A single choose_args map can be used for the entire cluster, or one can be created for each individual pool.
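
To give a sense of how these sections fit together, here is a minimal sketch of a decompiled map. The names, IDs, and weights are illustrative, only a few tunables and types are shown, and the optional choose_args section is omitted:

  # tunables
  tunable choose_total_tries 50

  # devices
  device 0 osd.0 class hdd
  device 1 osd.1 class hdd

  # types
  type 0 osd
  type 1 host
  type 11 root

  # buckets
  host node1 {
      id -2
      alg straw2
      hash 0
      item osd.0 weight 1.000
      item osd.1 weight 1.000
  }
  root default {
      id -1
      alg straw2
      hash 0
      item node1 weight 2.000
  }

  # rules
  rule replicated_rule {
      id 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }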

CRUSH Map Devices

Devices are individual ceph-osd daemons that can store data. You will normally have one defined here for each OSD daemon in your cluster. Devices are identified by an id (a non-negative integer) and a name, normally osd.N where N is the device id.

Devices may also have a device class associated with them (e.g., hdd or ssd), allowing them to be conveniently targeted by a crush rule.

  # devices
  device {num} {osd.name} [class {class}]

For example:

  # devices
  device 0 osd.0 class ssd
  device 1 osd.1 class hdd
  device 2 osd.2
  device 3 osd.3

In most cases, each device maps to a single ceph-osd daemon. This is normally a single storage device, a pair of devices (for example, one for data and one for a journal or metadata), or in some cases a small RAID device.
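
If all you need to change is a device's class, that particular edit usually does not require touching the map at all; as the note at the top of this page suggests, the standard CLI can do it. A sketch, with an illustrative OSD ID and class:

  ceph osd crush rm-device-class osd.2         # clear the existing class, if any
  ceph osd crush set-device-class hdd osd.2    # assign the desired class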

CRUSH Map Bucket Types

The second list in the CRUSH map defines ‘bucket’ types. Buckets facilitate a hierarchy of nodes and leaves. Node (or non-leaf) buckets typically represent physical locations in a hierarchy. Nodes aggregate other nodes or leaves. Leaf buckets represent ceph-osd daemons and their corresponding storage media.

Tip

The term “bucket” used in the context of CRUSH means a node in the hierarchy, i.e. a location or a piece of physical hardware. It is a different concept from the term “bucket” when used in the context of RADOS Gateway APIs.

To add a bucket type to the CRUSH map, create a new line under your list of bucket types. Enter type followed by a unique numeric ID and a bucket name. By convention, there is one leaf bucket and it is type 0; however, you may give it any name you like (e.g., osd, disk, drive, storage, etc.):

  # types
  type {num} {bucket-name}

For example:

  # types
  type 0 osd
  type 1 host
  type 2 chassis
  type 3 rack
  type 4 row
  type 5 pdu
  type 6 pod
  type 7 room
  type 8 datacenter
  type 9 zone
  type 10 region
  type 11 root

CRUSH Map Bucket Hierarchy

The CRUSH algorithm distributes data objects among storage devices according to a per-device weight value, approximating a uniform probability distribution. CRUSH distributes objects and their replicas according to the hierarchical cluster map you define. Your CRUSH map represents the available storage devices and the logical elements that contain them.

To map placement groups to OSDs across failure domains, a CRUSH map defines a hierarchical list of bucket types (i.e., under #types in the generated CRUSH map). The purpose of creating a bucket hierarchy is to segregate the leaf nodes by their failure domains, such as hosts, chassis, racks, power distribution units, pods, rows, rooms, and data centers. With the exception of the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary, and you may define it according to your own needs.

We recommend adapting your CRUSH map to your firm’s hardware naming conventions and using instance names that reflect the physical hardware. Your naming practice can make it easier to administer the cluster and troubleshoot problems when an OSD and/or other hardware malfunctions and the administrator needs access to physical hardware.

In the following example, the bucket hierarchy has a leaf bucket named osd, and two node buckets named host and rack respectively.

[Figure: an example bucket hierarchy in which osd leaf buckets sit under host buckets, which in turn sit under a rack bucket]

Note

The higher numbered rack bucket type aggregates the lower numbered host bucket type.

Since leaf nodes reflect storage devices declared under the #devices list at the beginning of the CRUSH map, you do not need to declare them as bucket instances. The second lowest bucket type in your hierarchy usually aggregates the devices (i.e., it’s usually the computer containing the storage media, and uses whatever term you prefer to describe it, such as “node”, “computer”, “server”, “host”, “machine”, etc.). In high density environments, it is increasingly common to see multiple hosts/nodes per chassis. You should account for chassis failure too: for example, the need to pull a chassis if a node fails may result in bringing down numerous hosts/nodes and their OSDs.

When declaring a bucket instance, you must specify its type, give it a unique name (string), assign it a unique ID expressed as a negative integer (optional), specify a weight relative to the total capacity/capability of its item(s), specify the bucket algorithm (usually straw2), and the hash (usually 0, reflecting hash algorithm rjenkins1). A bucket may have one or more items. The items may consist of node buckets or leaves. Items may have a weight that reflects the relative weight of the item.

You may declare a node bucket with the following syntax:

  [bucket-type] [bucket-name] {
      id [a unique negative numeric ID]
      weight [the relative capacity/capability of the item(s)]
      alg [the bucket type: uniform | list | tree | straw | straw2 ]
      hash [the hash type: 0 by default]
      item [item-name] weight [weight]
  }

For example, using the diagram above, we would define two host buckets and one rack bucket. The OSDs are declared as items within the host buckets:

  host node1 {
      id -1
      alg straw2
      hash 0
      item osd.0 weight 1.00
      item osd.1 weight 1.00
  }

  host node2 {
      id -2
      alg straw2
      hash 0
      item osd.2 weight 1.00
      item osd.3 weight 1.00
  }

  rack rack1 {
      id -3
      alg straw2
      hash 0
      item node1 weight 2.00
      item node2 weight 2.00
  }

Note

In the foregoing example, note that the rack bucket does not contain any OSDs. Rather it contains lower level host buckets, and includes the sum total of their weight in the item entry.

CRUSH Map Rules

CRUSH maps support the notion of ‘CRUSH rules’, which are the rules that determine data placement for a pool. The default CRUSH map has a rule for each pool. For large clusters, you will likely create many pools where each pool may have its own non-default CRUSH rule.

Note

In most cases, you will not need to modify the default rule. When you create a new pool, by default the rule will be set to 0.

CRUSH rules define placement and replication strategies or distribution policies that allow you to specify exactly how CRUSH places object replicas. For example, you might create a rule selecting a pair of targets for 2-way mirroring, another rule for selecting three targets in two different data centers for 3-way mirroring, and yet another rule for erasure coding over six storage devices. For a detailed discussion of CRUSH rules, refer to CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data, and more specifically to Section 3.2.

A rule takes the following form:

  rule <rulename> {
      id [a unique whole numeric ID]
      type [ replicated | erasure ]
      min_size <min-size>
      max_size <max-size>
      step take <bucket-name> [class <device-class>]
      step [choose|chooseleaf] [firstn|indep] <N> type <bucket-type>
      step emit
  }
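
For example, a replicated rule that restricts placement to a single device class and spreads replicas across hosts might look like the following sketch (the rule name, ID, and the ssd class are illustrative and assume that class exists on your devices):

  rule replicated_ssd {
      id 1
      type replicated
      min_size 1
      max_size 10
      step take default class ssd
      step chooseleaf firstn 0 type host
      step emit
  }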

id

  • Description: A unique whole number for identifying the rule.

  • Purpose: A component of the rule mask.

  • Type: Integer

  • Required: Yes

  • Default: 0

type

  • Description: Describes a rule for either a storage drive (replicated) or a RAID.

  • Purpose: A component of the rule mask.

  • Type: String

  • Required: Yes

  • Default: replicated

  • Valid Values: Currently only replicated and erasure

min_size

  • Description: If a pool makes fewer replicas than this number, CRUSH will NOT select this rule.

  • Type: Integer

  • Purpose: A component of the rule mask.

  • Required: Yes

  • Default: 1

max_size

  • Description: If a pool makes more replicas than this number, CRUSH will NOT select this rule.

  • Type: Integer

  • Purpose: A component of the rule mask.

  • Required: Yes

  • Default: 10

step take <bucket-name> [class <device-class>]

  • Description: Takes a bucket name, and begins iterating down the tree. If device-class is specified, it must match a class previously used when defining a device. All devices that do not belong to the class are excluded.

  • Purpose: A component of the rule.

  • Required: Yes

  • Example: step take data

step choose firstn {num} type {bucket-type}

  • Description: Selects the number of buckets of the given type from within the current bucket. The number is usually the number of replicas in the pool (i.e., pool size).

    • If {num} == 0, choose pool-num-replicas buckets (all available).

    • If {num} > 0 && < pool-num-replicas, choose that many buckets.

    • If {num} < 0, it means pool-num-replicas - {num}.

  • Purpose: A component of the rule.

  • Prerequisite: Follows step take or step choose.

  • Example: step choose firstn 1 type row

step chooseleaf firstn {num} type {bucket-type}

  • Description: Selects a set of buckets of {bucket-type} and chooses a leaf node (that is, an OSD) from the subtree of each bucket in the set of buckets. The number of buckets in the set is usually the number of replicas in the pool (i.e., pool size).

    • If {num} == 0, choose pool-num-replicas buckets (all available).

    • If {num} > 0 && < pool-num-replicas, choose that many buckets.

    • If {num} < 0, it means pool-num-replicas - {num}.

  • Purpose: A component of the rule. Usage removes the need to select a device using two steps (see the sketch below).

  • Prerequisite: Follows step take or step choose.

  • Example: step chooseleaf firstn 0 type row
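
To illustrate the difference, the following sketch shows two rules that place each replica on a distinct host: the first selects host buckets and then descends to an OSD in a separate choose step, while the second collapses both steps into a single chooseleaf step. The rule names and IDs are illustrative:

  # Two-step form: pick host buckets, then pick one OSD within each.
  rule replicated_two_step {
      id 10
      type replicated
      min_size 1
      max_size 10
      step take default
      step choose firstn 0 type host
      step choose firstn 1 type osd
      step emit
  }

  # Equivalent single-step form using chooseleaf.
  rule replicated_chooseleaf {
      id 11
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }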

step emit

  • Description: Outputs the current value and empties the stack. Typically used at the end of a rule, but may also be used to pick from different trees in the same rule (see the sketch below).

  • Purpose: A component of the rule.

  • Prerequisite: Follows step choose.

  • Example: step emit
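
As a sketch of the “pick from different trees” case, the following hypothetical rule places the first replica under one root and the remaining replicas under another, with an emit after each take. The ssd and hdd root names are illustrative and assume such roots exist in your hierarchy:

  rule ssd_primary {
      id 12
      type replicated
      min_size 1
      max_size 10
      step take ssd
      step chooseleaf firstn 1 type host
      step emit
      step take hdd
      step chooseleaf firstn -1 type host
      step emit
  }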

Important

A given CRUSH rule may be assigned to multiple pools, but it is not possible for a single pool to have multiple CRUSH rules.

firstn versus indep

  • Description: Controls the replacement strategy CRUSH uses when items (OSDs) are marked down in the CRUSH map. If this rule is to be used with replicated pools it should be firstn; if it is for erasure-coded pools it should be indep.

The reason has to do with how they behave when a previously selected device fails. Say you have a PG stored on OSDs 1, 2, 3, 4, 5, and then 3 goes down.

With the “firstn” mode, CRUSH simply adjusts its calculation to select 1 and 2, then selects 3 but discovers it is down, so it retries and selects 4 and 5, and then goes on to select a new OSD 6. So the final CRUSH mapping change is 1, 2, 3, 4, 5 -> 1, 2, 4, 5, 6.

But if you are storing an EC pool, that means you just changed the data mapped to OSDs 4, 5, and 6! So the “indep” mode attempts to not do that. You can instead expect it, when it selects the failed OSD 3, to try again and pick out 6, for a final transformation of 1, 2, 3, 4, 5 -> 1, 2, 6, 4, 5.
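
Putting this together, a sketch of an erasure-coded rule using indep might look like the following. The rule name, ID, and size bounds are illustrative; rules generated automatically for erasure-coded pools may include additional tuning steps:

  rule ecpool_rule {
      id 13
      type erasure
      min_size 3
      max_size 6
      step take default
      step chooseleaf indep 0 type host
      step emit
  }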

Migrating from a legacy SSD rule to device classes

It used to be necessary to manually edit your CRUSH map and maintain a parallel hierarchy for each specialized device type (e.g., SSD) in order to write rules that apply to those devices. Since the Luminous release, the device class feature has enabled this transparently.

However, migrating from an existing, manually customized per-device map to the new device class rules in the trivial way will cause all data in the system to be reshuffled.

The crushtool has a few commands that can transform a legacy rule and hierarchy so that you can start using the new class-based rules. There are three types of transformations possible:

  • --reclassify-root <root-name> <device-class>

This will take everything in the hierarchy beneath root-name and adjust any rules that reference that root via a take <root-name> to instead take <root-name> class <device-class>. It renumbers the buckets in such a way that the old IDs are instead used for the specified class’s “shadow tree” so that no data movement takes place.

For example, imagine you have an existing rule like:

  rule replicated_ruleset {
      id 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type rack
      step emit
  }

If you reclassify the root default as class hdd, the rule will become:

  rule replicated_ruleset {
      id 0
      type replicated
      min_size 1
      max_size 10
      step take default class hdd
      step chooseleaf firstn 0 type rack
      step emit
  }

  • --set-subtree-class <bucket-name> <device-class>

This will mark every device in the subtree rooted at bucket-name with the specified device class.

This is normally used in conjunction with the --reclassify-root option to ensure that all devices in that root are labeled with the correct class. In some situations, however, some of those devices (correctly) have a different class and we do not want to relabel them. In such cases, one can exclude the --set-subtree-class option. This means that the remapping process will not be perfect, since the previous rule distributed across devices of multiple classes while the adjusted rules will only map to devices of the specified device class, but that is often an acceptable level of data movement when the number of outlier devices is small.

  • --reclassify-bucket <match-pattern> <device-class> <default-parent>

This will allow you to merge a parallel type-specific hierarchy with the normal hierarchy. For example, many users have maps like:

  host node1 {
      id -2           # do not change unnecessarily
      # weight 109.152
      alg straw2
      hash 0          # rjenkins1
      item osd.0 weight 9.096
      item osd.1 weight 9.096
      item osd.2 weight 9.096
      item osd.3 weight 9.096
      item osd.4 weight 9.096
      item osd.5 weight 9.096
      ...
  }

  host node1-ssd {
      id -10          # do not change unnecessarily
      # weight 2.000
      alg straw2
      hash 0          # rjenkins1
      item osd.80 weight 2.000
      ...
  }

  root default {
      id -1           # do not change unnecessarily
      alg straw2
      hash 0          # rjenkins1
      item node1 weight 110.967
      ...
  }

  root ssd {
      id -18          # do not change unnecessarily
      # weight 16.000
      alg straw2
      hash 0          # rjenkins1
      item node1-ssd weight 2.000
      ...
  }

This function will reclassify each bucket that matches a pattern. The pattern can look like %suffix or prefix%. For example, in the above example, we would use the pattern %-ssd. For each matched bucket, the remaining portion of the name (that matches the % wildcard) specifies the base bucket. All devices in the matched bucket are labeled with the specified device class and then moved to the base bucket. If the base bucket does not exist (e.g., node12-ssd exists but node12 does not), then it is created and linked underneath the specified default parent bucket. In each case, we are careful to preserve the old bucket IDs for the new shadow buckets to prevent data movement. Any rules with take steps referencing the old buckets are adjusted.

  • --reclassify-bucket <bucket-name> <device-class> <base-bucket>

The same command can also be used without a wildcard to map a single bucket. For example, in the previous example, we want the ssd bucket to be mapped to the default bucket.

The final command to convert the map comprised of the above fragments would be something like:

  $ ceph osd getcrushmap -o original
  $ crushtool -i original --reclassify \
        --set-subtree-class default hdd \
        --reclassify-root default hdd \
        --reclassify-bucket %-ssd ssd default \
        --reclassify-bucket ssd ssd default \
        -o adjusted

In order to ensure that the conversion is correct, there is a --compare command that will test a large sample of inputs to the CRUSH map and ensure that the same result comes back out. These inputs are controlled by the same options that apply to the --test command. For the above example:

  $ crushtool -i original --compare adjusted
  rule 0 had 0/10240 mismatched mappings (0)
  rule 1 had 0/10240 mismatched mappings (0)
  maps appear equivalent

If there were differences, you would see what ratio of inputs are remapped in the parentheses.

If you are satisfied with the adjusted map, you can apply it to the cluster with something like:

  ceph osd setcrushmap -i adjusted

Tuning CRUSH, the hard way

If you can ensure that all clients are running recent code, you can adjust the tunables by extracting the CRUSH map, modifying the values, and reinjecting it into the cluster.

  • Extract the latest CRUSH map:

      ceph osd getcrushmap -o /tmp/crush

  • Adjust tunables. These values appear to offer the best behavior for both large and small clusters we tested with. You will need to additionally specify the --enable-unsafe-tunables argument to crushtool for this to work. Please use this option with extreme care:

      crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new

  • Reinject the modified map:

      ceph osd setcrushmap -i /tmp/crush.new

Legacy values

For reference, the legacy values for the CRUSH tunables can be set with:

  crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy

Again, the special --enable-unsafe-tunables option is required. Further, as noted above, be careful running old versions of the ceph-osd daemon after reverting to legacy values as the feature bit is not perfectly enforced.