Storage Cluster Quick Start

If you haven’t completed your Preflight Checklist, do that first. This Quick Start sets up a Ceph Storage Cluster using ceph-deploy on your admin node. Create a three-node Ceph cluster so you can explore Ceph functionality.


As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three Ceph OSD Daemons. Once the cluster reaches an active + clean state, expand it by adding a fourth Ceph OSD Daemon and two more Ceph Monitors. For best results, create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.

  mkdir my-cluster
  cd my-cluster

The ceph-deploy utility will output files to the current directory. Ensure you are in this directory when executing ceph-deploy.

Important

Do not call ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue the sudo commands needed on the remote host.

Starting over

If at any point you run into trouble and you want to start over, execute the following to purge the Ceph packages and erase all of their data and configuration:

  ceph-deploy purge {ceph-node} [{ceph-node}]
  ceph-deploy purgedata {ceph-node} [{ceph-node}]
  ceph-deploy forgetkeys
  rm ceph.*

If you execute purge, you must re-install Ceph. The last rm command removes any files that were written out by ceph-deploy locally during a previous installation.

Create a Cluster

On your admin node, from the directory you created for holding your configuration details, perform the following steps using ceph-deploy.

  • Create the cluster.
  ceph-deploy new {initial-monitor-node(s)}

Specify node(s) as hostname, fqdn or hostname:fqdn. For example:

  ceph-deploy new node1

Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file (ceph.conf), a monitor secret keyring (ceph.mon.keyring), and a log file for the new cluster. See ceph-deploy new -h for additional details.
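
For instance, a listing of the directory after running the command above might look like the following (the exact name of the log file can vary between ceph-deploy versions):

  ls
  ceph.conf  ceph.mon.keyring  ceph-deploy-ceph.log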

Note for users of Ubuntu 18.04: Python 2 is a prerequisite of Ceph. Install the python-minimal package on Ubuntu 18.04 to provide Python 2:

  [Ubuntu 18.04] $ sudo apt install python-minimal
  • If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference for details.
  public network = {ip-address}/{bits}

For example:

  public network = 10.1.2.0/24

to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.

  • If you are deploying in an IPv6 environment, add the following to ceph.conf in the local directory:
  echo ms bind ipv6 = true >> ceph.conf
  • Install Ceph packages:
  ceph-deploy install {ceph-node} [...]

For example:

  ceph-deploy install node1 node2 node3

The ceph-deploy utility will install Ceph on each node.

  • Deploy the initial monitor(s) and gather the keys:
  ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

  • ceph.client.admin.keyring

  • ceph.bootstrap-mgr.keyring

  • ceph.bootstrap-osd.keyring

  • ceph.bootstrap-mds.keyring

  • ceph.bootstrap-rgw.keyring

  • ceph.bootstrap-rbd.keyring

  • ceph.bootstrap-rbd-mirror.keyring

Note

If this process fails with a message similar to “Unable to find /etc/ceph/ceph.client.admin.keyring”, please ensure that the IP listed for the monitor node in ceph.conf is the Public IP, not the Private IP.
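
To verify this, you can inspect the monitor entries in the generated configuration file, for example (the key names may appear with spaces or with underscores depending on your ceph-deploy version):

  grep mon ceph.conf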

  • Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
  ceph-deploy admin {ceph-node(s)}

For example:

  ceph-deploy admin node1 node2 node3
  • Deploy a manager daemon (required only for luminous+ builds, i.e., >= 12.x builds):
  ceph-deploy mgr create node1
  • Add three OSDs. For the purposes of these instructions, we assume you have an unused disk in each node called /dev/vdb. Be sure that the device is not currently in use and does not contain any important data.
  ceph-deploy osd create --data {device} {ceph-node}

For example:

  ceph-deploy osd create --data /dev/vdb node1
  ceph-deploy osd create --data /dev/vdb node2
  ceph-deploy osd create --data /dev/vdb node3

Note

If you are creating an OSD on an LVM volume, the argument to --data must be volume_group/lv_name, rather than the path to the volume’s block device.
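
For instance, assuming a hypothetical volume group vg0 containing a logical volume named osd-data on node1, the command would take this form:

  ceph-deploy osd create --data vg0/osd-data node1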

  • Check your cluster’s health.
  ssh node1 sudo ceph health

Your cluster should report HEALTH_OK. You can view a more complete cluster status with:

  ssh node1 sudo ceph -s

Expanding Your Cluster

Once you have a basic cluster up and running, the next step is to expand the cluster. Add a Ceph Monitor and Ceph Manager to node2 and node3 to improve reliability and availability.


Adding Monitors

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph Manager to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., greater than N/2, where N is the number of monitors) to form a quorum. For example, a cluster with three monitors can tolerate the failure of one, and a cluster with five can tolerate the failure of two. Odd numbers of monitors tend to be better, although this is not required.

Add two Ceph Monitors to your cluster:

  ceph-deploy mon add {ceph-nodes}

For example:

  ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

  ceph quorum_status --format json-pretty

Tip

When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.
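
As a minimal sketch (assuming Debian/Ubuntu monitor hosts and the classic ntp daemon; adapt the package and service names to your distribution and preferred time daemon), installing and enabling NTP on each monitor host might look like:

  sudo apt install ntp
  sudo systemctl enable --now ntp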

Adding Managers

The Ceph Manager daemons operate in an active/standby pattern. Deploying additional manager daemons ensures that if one daemon or host fails, another one can take over without interrupting service.

To deploy additional manager daemons:

  ceph-deploy mgr create node2 node3

You should see the standby managers in the output from:

  ssh node1 sudo ceph -s

Add an RGW Instance

To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of RGW:

  ceph-deploy rgw create {gateway-node}

For example:

  ceph-deploy rgw create node1

By default, the RGW instance will listen on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows:

  [client]
  rgw frontends = civetweb port=80

To use an IPv6 address, use:

  [client]
  rgw frontends = civetweb port=[::]:80
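
As a quick sanity check (assuming node1 is your gateway node and the default port is still in use), you can confirm the gateway is responding with a simple anonymous HTTP request, which typically returns a small XML listing:

  curl http://node1:7480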

Storing/Retrieving Object Data

To store object data in the Ceph Storage Cluster, a Ceph client must:

  • Set an object name

  • Specify a pool

The Ceph Client retrieves the latest cluster map, and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to a Ceph OSD Daemon dynamically. To find the object location, all you need is the object name and the pool name. For example:

  ceph osd map {poolname} {object-name}
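
For instance, assuming a hypothetical pool named mytest that already exists and an object named test-object-1 stored in it, the lookup would be:

  ceph osd map mytest test-object-1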

As the cluster evolves, the object location may change dynamically. One benefit of Ceph’s dynamic rebalancing is that Ceph relieves you from having to perform data migration or balancing manually.