Add/Remove OSDs

Adding and removing Ceph OSD Daemons may involve a few more steps than adding and removing other Ceph daemons. Ceph OSD Daemons write data to the disk and to journals, so you need to provide a disk for the OSD and a path to the journal partition (this is the most common configuration, but you may configure your system to your own needs).

In Ceph v0.60 and later releases, Ceph supports dm-crypt on-disk encryption. You may specify the --dmcrypt argument when preparing an OSD to tell ceph-deploy that you want to use encryption. You may also specify the --dmcrypt-key-dir argument to specify the location of the dm-crypt encryption keys.
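
For example, assuming a node named osd-server1 with an unused device /dev/sdb (both hypothetical), an encrypted OSD might be created as shown below; the exact subcommand (create vs. prepare) and the key directory shown depend on your ceph-deploy version and layout:

  # create an OSD whose data device is encrypted with dm-crypt
  ceph-deploy osd create --data /dev/sdb --dmcrypt osd-server1
  # same, but store the dm-crypt keys in a non-default directory (illustrative path)
  ceph-deploy osd create --data /dev/sdb --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys osd-server1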

You should test various drive configurations to gauge their throughput before building out a large cluster. See Data Storage for additional details.

List Disks

To list the disks on a node, execute the following command:

  ceph-deploy disk list {node-name [node-name]...}
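
For example, to list the disks on two hypothetical nodes:

  ceph-deploy disk list osd-server1 osd-server2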

Zap Disks

To zap a disk (delete its partition table) in preparation for use with Ceph, execute the following:

  ceph-deploy disk zap {osd-server-name} {disk-name}
  ceph-deploy disk zap osdserver1 /dev/sdb /dev/sdc

Important

This will delete all data.

Create OSDs

Once you create a cluster, install Ceph packages, and gather keys, you may create the OSDs and deploy them to the OSD node(s). If you need to identify a disk or zap it prior to preparing it for use as an OSD, see List Disks and Zap Disks.

  ceph-deploy osd create --data {data-disk} {node-name}

For example:

  ceph-deploy osd create --data /dev/ssd osd-server1

For bluestore (the default) the example assumes a disk dedicated to one Ceph OSD Daemon. Filestore is also supported, in which case a --journal flag in addition to --filestore needs to be used to define the journal device on the remote host. A sketch of such an invocation follows.
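
As a minimal sketch, a filestore OSD with its journal on a separate device might be created like this, assuming hypothetical devices /dev/sdb (data) and /dev/sdc1 (journal) on a node named osd-server1:

  # filestore OSD: data on /dev/sdb, journal on /dev/sdc1 (illustrative devices)
  ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/sdc1 osd-server1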

Note

When running multiple Ceph OSD daemons on a single node, and sharing a partitioned journal with each OSD daemon, you should consider the entire node the minimum failure domain for CRUSH purposes, because if the SSD drive fails, all of the Ceph OSD daemons that journal to it will fail too.

List OSDs

To list the OSDs deployed on one or more nodes, execute the following command:

  ceph-deploy osd list {node-name}
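
For example, using the hypothetical node name from above:

  ceph-deploy osd list osd-server1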

Destroy OSDs

Note

Coming soon. See Remove OSDs for manual procedures.