ceph-deploy – Ceph deployment tool

Synopsis

ceph-deploy new [initial-monitor-node(s)]

ceph-deploy install [ceph-node] [ceph-node...]

ceph-deploy mon create-initial

ceph-deploy osd create --data device ceph-node

ceph-deploy admin [admin-node] [ceph-node...]

ceph-deploy purgedata [ceph-node] [ceph-node...]

ceph-deploy forgetkeys

Description

ceph-deploy is a tool which allows easy and quick deployment of a Ceph cluster without involving complex and detailed manual configuration. It uses ssh to gain access to other Ceph nodes from the admin node and sudo for administrator privileges on them, and its underlying Python scripts automate the manual process of Ceph installation on each node from the admin node itself. It can easily be run on a workstation and doesn't require servers, databases or any other automated tools. With ceph-deploy, it is really easy to set up and take down a cluster. However, it is not a generic deployment tool. It is a specific tool designed for those who want to get Ceph up and running quickly with only the unavoidable initial configuration settings and without the overhead of installing other tools like Chef, Puppet or Juju. Those who want to customize security settings, partitions or directory locations, or who want to set up a cluster following detailed manual steps, should use other tools, i.e., Chef, Puppet, Juju or Crowbar.

With ceph-deploy, you can install Ceph packages on remote nodes, create a cluster, add monitors, gather/forget keys, add OSDs and metadata servers, configure admin hosts or take down the cluster.
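
For example, a minimal first cluster might be brought up from the admin node as follows (the hostnames node1, node2, node3 and the device /dev/sdb are placeholders; substitute your own):

  ceph-deploy new node1                      # write ceph.conf and monitor keyring
  ceph-deploy install node1 node2 node3      # install Ceph packages on all nodes
  ceph-deploy mon create-initial             # deploy initial monitor(s) and gather keys
  ceph-deploy osd create --data /dev/sdb node2
  ceph-deploy osd create --data /dev/sdb node3
  ceph-deploy admin node1 node2 node3        # push ceph.conf and admin keyring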

Commands

new

Start deploying a new cluster and write a configuration file and keyring for it. It tries to copy ssh keys from the admin node to gain passwordless ssh to the monitor node(s), validates the host IP, and creates a cluster with a new initial monitor node or nodes for monitor quorum, a Ceph configuration file, a monitor secret keyring and a log file for the new cluster. It populates the newly created Ceph configuration file with the fsid of the cluster and the hostnames and IP addresses of the initial monitor members under the [global] section.

Usage:

  ceph-deploy new [MON] [MON...]

Here, [MON] is the initial monitor hostname (short hostname, i.e., hostname -s).

Other options like --no-ssh-copykey, --fsid, --cluster-network and --public-network can also be used with this command.

If more than one network interface is used, the public network setting has to be added under the [global] section of the Ceph configuration file. If the public subnet is given, the new command will choose the one IP from the remote host that exists within the subnet range. The public network can also be added at runtime using the --public-network option with the command as mentioned above.
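
For illustration, a [global] section written by new and extended with a public network setting might look like the following sketch (the fsid, hostname and addresses are placeholder values):

  [global]
  fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
  mon initial members = node1
  mon host = 10.0.0.11
  public network = 10.0.0.0/24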

install

Install Ceph packages on remote hosts. As a first step it installs yum-plugin-priorities on the admin and other nodes using passwordless ssh and sudo so that Ceph packages from the upstream repository get more priority. It then detects the platform and distribution for the hosts and installs Ceph normally by downloading distro-compatible packages if an adequate repo for Ceph is already added. The --release flag is used to get the latest release for installation. During detection of platform and distribution before installation, if it finds the distro.init to be sysvinit (Fedora, CentOS/RHEL etc.), it doesn't allow installation with a custom cluster name and uses the default name ceph for the cluster.

If the user explicitly specifies a custom repo url with --repo-url for installation, anything detected from the configuration will be overridden and the custom repository location will be used for installation of Ceph packages. If required, valid custom repositories are also detected and installed. In case of installation from a custom repo a boolean is used to determine the logic needed to proceed with a custom repo installation. A custom repo install helper is used that goes through config checks to retrieve repos (and any extra repos defined) and installs them. cd_conf is the object built from argparse that holds the flags and information needed to determine what metadata from the configuration is to be used.

A user can also opt to install only the repository without installing Ceph and its dependencies by using the --repo option.

Usage:

  ceph-deploy install [HOST] [HOST...]

Here, [HOST] is/are the host node(s) where Ceph is to be installed.

An option --release is used to install a release known as CODENAME (default: firefly).

Other options like --testing, --dev, --adjust-repos, --no-adjust-repos, --repo, --local-mirror, --repo-url and --gpg-url can also be used with this command.
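
For example, to pin a named release or to install from a custom mirror (the hostnames and mirror URL are placeholders):

  ceph-deploy install --release firefly node1 node2
  ceph-deploy install --repo-url http://mirror.example.com/ceph node1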

mds

Deploy Ceph mds on remote hosts. A metadata server is needed to use CephFS and the mds command is used to create one on the desired host node. It uses the subcommand create to do so. create first gets the hostname and distro information of the desired mds host. It then tries to read the bootstrap-mds key for the cluster and deploy it in the desired host. The key generally has a format of {cluster}.bootstrap-mds.keyring. If it doesn't find a keyring, it runs gatherkeys to get the keyring. It then creates an mds on the desired host under the path /var/lib/ceph/mds/ in /var/lib/ceph/mds/{cluster}-{name} format and a bootstrap keyring under /var/lib/ceph/bootstrap-mds/ in /var/lib/ceph/bootstrap-mds/{cluster}.keyring format. It then runs appropriate commands based on distro.init to start the mds.

Usage:

  ceph-deploy mds create [HOST[:DAEMON-NAME]] [HOST[:DAEMON-NAME]...]

The [DAEMON-NAME] is optional.
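
For example, to create one mds with an automatically derived name and one with an explicit daemon name (the hostnames and the daemon name mds-a are placeholders):

  ceph-deploy mds create node1
  ceph-deploy mds create node2:mds-a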

mon

Deploy Ceph monitor on remote hosts. mon makes use of certain subcommands to deploy Ceph monitors on other nodes.

Subcommand create-initial deploys monitors for all hosts defined in mon initial members under the [global] section of the Ceph configuration file, waits until they form quorum and then runs gatherkeys, reporting the monitor status along the way. If the monitors don't form quorum the command will eventually time out.

Usage:

  ceph-deploy mon create-initial

Subcommand create is used to deploy Ceph monitors by explicitly specifying the hosts which are desired to be made monitors. If no hosts are specified it will default to the mon initial members defined under the [global] section of the Ceph configuration file. create first detects platform and distro for the desired hosts and checks if the hostname is compatible for deployment. It then uses the monitor keyring initially created using the new command and deploys the monitor in the desired host. If multiple hosts were specified during the new command, i.e., if there are multiple hosts in mon initial members and multiple keyrings were created, then a concatenated keyring is used for deployment of monitors. In this process a keyring parser is used which looks for [entity] sections in monitor keyrings and returns a list of those sections. A helper is then used to collect all keyrings into a single blob that will be used to inject it to monitors with --mkfs on remote nodes. All keyring files are concatenated to be in a directory ending with .keyring. During this process the helper uses the list of sections returned by the keyring parser to check if an entity is already present in a keyring and, if not, adds it. The concatenated keyring is used for deployment of monitors to the desired multiple hosts.

Usage:

  ceph-deploy mon create [HOST] [HOST...]

Here, [HOST] is the hostname of the desired monitor host(s).

Subcommand add is used to add a monitor to an existing cluster. It first detects platform and distro for the desired host and checks if the hostname is compatible for deployment. It then uses the monitor keyring, ensures configuration for the new monitor host and adds the monitor to the cluster. If a section for the monitor exists and defines a monitor address, that address will be used; otherwise it will fall back to resolving the hostname to an IP. If --address is used it will override all other options. After adding the monitor to the cluster, it gives it some time to start. It then looks for any monitor errors and checks monitor status. Monitor errors arise if the monitor is not added in mon initial members, if it doesn't exist in the monmap and if neither public_addr nor public_network keys were defined for monitors. Under such conditions, monitors may not be able to form quorum. Monitor status tells if the monitor is up and running normally. The status is checked by running ceph daemon mon.hostname mon_status on the remote end which provides the output and returns a boolean status of what is going on. False means a monitor that is not fine even if it is up and running, while True means the monitor is up and running correctly.

Usage:

  ceph-deploy mon add [HOST]

  ceph-deploy mon add [HOST] --address [IP]

Here, [HOST] is the hostname and [IP] is the IP address of the desired monitor node. Please note, unlike other mon subcommands, only one node can be specified at a time.
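
For example, adding a monitor at an explicit address and then inspecting it by hand with the same mon_status call the tool runs (the hostname and IP are placeholders):

  ceph-deploy mon add node4 --address 192.168.1.14
  ssh node4 sudo ceph daemon mon.node4 mon_status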

Subcommand destroy is used to completely remove monitors on remote hosts. It takes hostnames as arguments. It stops the monitor, verifies that the ceph-mon daemon really stopped, creates an archive directory mon-remove under /var/lib/ceph/, archives the old monitor directory in {cluster}-{hostname}-{stamp} format in it and removes the monitor from the cluster by running the ceph remove… command.

Usage:

  ceph-deploy mon destroy [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor that is to be removed.

gatherkeys

Gather authentication keys for provisioning new nodes. It takes hostnames as arguments. It checks for and fetches the client.admin keyring, monitor keyring and bootstrap-mds/bootstrap-osd keyring from the monitor host. These authentication keys are used when new monitors/OSDs/MDS are added to the cluster.

Usage:

  ceph-deploy gatherkeys [HOST] [HOST...]

Here, [HOST] is the hostname of the monitor from which keys are to be pulled.

disk

Manage disks on a remote host. It actually triggers the ceph-volume utility and its subcommands to manage disks.

Subcommand list lists disk partitions and Ceph OSDs.

Usage:

  ceph-deploy disk list HOST

Subcommand zap zaps/erases/destroys a device's partition table and contents. It actually uses ceph-volume lvm zap remotely, alternatively allowing someone to remove the Ceph metadata from the logical volume.
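
For example, assuming the host-then-device argument form used by ceph-volume-based releases (the hostname and device are placeholders):

  ceph-deploy disk zap node1 /dev/sdb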

osd

Manage OSDs by preparing a data disk on a remote host. osd makes use of certain subcommands for managing OSDs.

Subcommand create prepares a device for a Ceph OSD. It first checks against multiple OSDs getting created and warns about the possibility of more than the recommended number, which would cause issues with the max allowed PIDs in a system. It then reads the bootstrap-osd key for the cluster, or writes the bootstrap key if not found. It then uses the ceph-volume utility's lvm create subcommand to prepare the disk (and journal if using filestore) and deploy the OSD on the desired host. Once prepared, it gives some time to the OSD to start and checks for any possible errors and, if found, reports them to the user.

Bluestore Usage:

  ceph-deploy osd create --data DISK HOST

Filestore Usage:

  ceph-deploy osd create --data DISK --journal JOURNAL HOST
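
Concrete invocations, with placeholder device and host names:

  ceph-deploy osd create --data /dev/sdb node1                     # bluestore
  ceph-deploy osd create --data /dev/sdb --journal /dev/sdc node1  # filestore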

Note

For other flags available, please see the man page or the --help menu of ceph-deploy osd create.

Subcommand list lists devices associated to Ceph as part of an OSD. It uses the rich output of ceph-volume lvm list, which maps OSDs to devices and reports other interesting information about the OSD setup.

Usage:

  ceph-deploy osd list HOST

admin

Push configuration and the client.admin key to a remote host. It takes the {cluster}.client.admin.keyring from the admin node and writes it under the /etc/ceph directory of the desired node.

Usage:

  ceph-deploy admin [HOST] [HOST...]

Here, [HOST] is the desired host to be configured for Ceph administration.
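
Assuming the default cluster name ceph, the pushed keyring can be checked on the remote host afterwards (the hostname node1 is a placeholder):

  ssh node1 ls -l /etc/ceph/ceph.client.admin.keyring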

config

Push/pull a configuration file to/from a remote host. It uses the push subcommand to take the configuration file from the admin host and write it to the remote host under the /etc/ceph directory. It uses the pull subcommand to do the opposite, i.e., pull the configuration file under the /etc/ceph directory of the remote host to the admin node.

Usage:

  ceph-deploy config push [HOST] [HOST...]

  ceph-deploy config pull [HOST] [HOST...]

Here, [HOST] is the hostname of the node where the config file will be pushed to or pulled from.

uninstall

Remove Ceph packages from remote hosts. It detects the platform and distro of the selected host and uninstalls Ceph packages from it. However, some dependencies like librbd1 and librados2 will not be removed because they can cause issues with qemu-kvm.

Usage:

  ceph-deploy uninstall [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be uninstalled.

purge

Remove Ceph packages from remote hosts and purge all data. It detects the platform and distro of the selected host, uninstalls Ceph packages and purges all data. However, some dependencies like librbd1 and librados2 will not be removed because they can cause issues with qemu-kvm.

Usage:

  ceph-deploy purge [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph will be purged.

purgedata

Purge (delete, destroy, discard, shred) any Ceph data from /var/lib/ceph. Once it detects the platform and distro of the desired host, it first checks if Ceph is still installed on the selected host; if installed, it won't purge data from it. If Ceph is already uninstalled from the host, it tries to remove the contents of /var/lib/ceph. If that fails, OSDs are probably still mounted and need to be unmounted to continue. It unmounts the OSDs, tries to remove the contents of /var/lib/ceph again and checks for errors. It also removes the contents of /etc/ceph. Once all steps are successfully completed, all the Ceph data from the selected host is removed.

Usage:

  ceph-deploy purgedata [HOST] [HOST...]

Here, [HOST] is the hostname of the node from where Ceph data will be purged.

forgetkeys

Remove authentication keys from the local directory. It removes all the authentication keys, i.e., the monitor keyring, client.admin keyring, bootstrap-osd keyring and bootstrap-mds keyring, from the node.

Usage:

  ceph-deploy forgetkeys

pkg

Manage packages on remote hosts. It is used for installing or removing packages from remote hosts. The package names for installation or removal are to be specified after the command. Two options, --install and --remove, are used for this purpose.

Usage:

  ceph-deploy pkg --install [PKGs] [HOST] [HOST...]

  ceph-deploy pkg --remove [PKGs] [HOST] [HOST...]

Here, [PKGs] is a comma-separated list of package names and [HOST] is the hostname of the remote node where packages are to be installed or removed from.
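
For example (the package names and hostnames are placeholders):

  ceph-deploy pkg --install vim,tmux node1 node2
  ceph-deploy pkg --remove vim node1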

Options

  • --address
    IP address of the host node to be added to the cluster.
  • --adjust-repos
    Install packages modifying source repos.
  • --ceph-conf
    Use (or reuse) a given ceph.conf file.
  • --cluster
    Name of the cluster.
  • --dev
    Install a bleeding-edge build from a Git branch or tag (default: master).
  • --cluster-network
    Specify the (internal) cluster network.
  • --dmcrypt
    Encrypt [data-path] and/or journal devices with dm-crypt.
  • --dmcrypt-key-dir
    Directory where dm-crypt keys are stored.
  • --install
    Comma-separated package(s) to install on remote hosts.
  • --fs-type
    Filesystem to use to format disk (xfs, btrfs or ext4). Note that support for btrfs and ext4 is no longer tested or recommended; please use xfs.
  • --fsid
    Provide an alternate FSID for ceph.conf generation.
  • --gpg-url
    Specify a GPG key url to be used with custom repos (defaults to ceph.com).
  • --keyrings
    Concatenate multiple keyrings to be seeded on new monitors.
  • --local-mirror
    Fetch packages and push them to hosts for a local repo mirror.
  • --mkfs
    Inject keys to MONs on remote nodes.
  • --no-adjust-repos
    Install packages without modifying source repos.
  • --no-ssh-copykey
    Do not attempt to copy ssh keys.
  • --overwrite-conf
    Overwrite an existing conf file on the remote host (if present).
  • --public-network
    Specify the public network for a cluster.
  • --remove
    Comma-separated package(s) to remove from remote hosts.
  • --repo
    Install repo files only (skips package installation).
  • --repo-url
    Specify a repo url that mirrors/contains Ceph packages.
  • --testing
    Install the latest development release.
  • --username
    The username to connect to the remote host.
  • --version
    The current installed version of ceph-deploy.
  • --zap-disk
    Destroy the partition table and content of a disk.

Availability

ceph-deploy is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the documentation at https://ceph.com/ceph-deploy/docs for more information.

See also

ceph-mon(8), ceph-osd(8), ceph-volume(8), ceph-mds(8)