Ceph Storage Quickstart

This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.

Minimum Version

Kubernetes v1.7 or higher is supported by Rook.

Prerequisites

To make sure you have a Kubernetes cluster that is ready for Rook, you can follow these instructions.

If you are using dataDirHostPath to persist Rook data on your Kubernetes hosts, make sure each host has at least 5GB of space available on the specified path.
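
Before deploying, you can check how much space is available on that path. A minimal sketch, assuming the default /var/lib/rook path used later in this guide:

  # Check free space on the directory that will back dataDirHostPath.
  # Falls back to /var/lib if /var/lib/rook does not exist yet.
  df -h /var/lib/rook 2>/dev/null || df -h /var/lib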

TL;DR

If you’re feeling lucky, a simple Rook cluster can be created with the following kubectl commands. For the more detailed install, skip to the next section to deploy the Rook operator.

  cd cluster/examples/kubernetes/ceph
  kubectl create -f operator.yaml
  kubectl create -f cluster.yaml

After the cluster is running, you can create block, object, or file storage to be consumed by other applications in your cluster.

Deploy the Rook Operator

The first step is to deploy the Rook system components, which include the Rook agent running on each node in your cluster as well as the Rook operator pod.

  cd cluster/examples/kubernetes/ceph
  kubectl create -f operator.yaml
  # verify the rook-ceph-operator, rook-ceph-agent, and rook-discover pods are in the `Running` state before proceeding
  kubectl -n rook-ceph-system get pod

You can also deploy the operator with the Rook Helm Chart.
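
For example, an operator install via Helm looks roughly like the following. The repository URL and chart name shown here are assumptions based on the beta chart; confirm them against the Rook Helm Chart documentation for your release:

  # Hypothetical Helm install of the Rook operator into rook-ceph-system.
  # Verify the repo URL and chart name against the Helm chart docs.
  helm repo add rook-beta https://charts.rook.io/beta
  helm install --namespace rook-ceph-system rook-beta/rook-ceph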


Restart Kubelet

(K8S 1.7.x only)

For versions of Kubernetes prior to 1.8, the Kubelet process on all nodes will require a restart after the Rook operator and Rook agents have been deployed. As part of their initial setup, the Rook agents deploy and configure a Flexvolume plugin in order to integrate with Kubernetes’ volume controller framework. In Kubernetes v1.8+, the dynamic Flexvolume plugin discovery will find and initialize our plugin, but in older versions of Kubernetes a manual restart of the Kubelet will be required.
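
On a typical systemd-based host the restart looks like the sketch below; the service name is an assumption and may differ depending on how the Kubelet was installed:

  # Run on every node after the operator and agents are deployed (K8S 1.7.x only).
  # Assumes the Kubelet runs as a systemd service named "kubelet".
  sudo systemctl restart kubelet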


Create a Rook Cluster

Now that the Rook operator, agent, and discover pods are running, we can create the Rook cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath property. For more settings, see the documentation on configuring the cluster.

Save the cluster spec as cluster.yaml:

  #################################################################################
  # This example first defines some necessary namespace and RBAC security objects.
  # The actual Ceph Cluster CRD example can be found at the bottom of this example.
  #################################################################################
  apiVersion: v1
  kind: Namespace
  metadata:
    name: rook-ceph
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: rook-ceph-cluster
    namespace: rook-ceph
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: rook-ceph-cluster
    namespace: rook-ceph
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: [ "get", "list", "watch", "create", "update", "delete" ]
  ---
  # Allow the operator to create resources in this cluster's namespace
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: rook-ceph-cluster-mgmt
    namespace: rook-ceph
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: rook-ceph-cluster-mgmt
  subjects:
  - kind: ServiceAccount
    name: rook-ceph-system
    namespace: rook-ceph-system
  ---
  # Allow the pods in this namespace to work with configmaps
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1beta1
  metadata:
    name: rook-ceph-cluster
    namespace: rook-ceph
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: rook-ceph-cluster
  subjects:
  - kind: ServiceAccount
    name: rook-ceph-cluster
    namespace: rook-ceph
  ---
  #################################################################################
  # The Ceph Cluster CRD example
  #################################################################################
  apiVersion: ceph.rook.io/v1beta1
  kind: Cluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    dataDirHostPath: /var/lib/rook
    dashboard:
      enabled: true
    storage:
      useAllNodes: true
      useAllDevices: false
      config:
        databaseSizeMB: "1024"
        journalSizeMB: "1024"

Create the cluster:

  kubectl create -f cluster.yaml

Use kubectl to list pods in the rook-ceph namespace. You should be able to see the following pods once they are all running. The number of osd pods will depend on the number of nodes in the cluster and the number of devices and directories configured.

  $ kubectl -n rook-ceph get pod
  NAME                                   READY     STATUS      RESTARTS   AGE
  rook-ceph-mgr-a-75cc4ccbf4-t8qtx       1/1       Running     0          24m
  rook-ceph-mon0-72vx7                   1/1       Running     0          25m
  rook-ceph-mon1-rrpm6                   1/1       Running     0          24m
  rook-ceph-mon2-zff9r                   1/1       Running     0          24m
  rook-ceph-osd-id-0-5fd8cb9747-dvlsb    1/1       Running     0          23m
  rook-ceph-osd-id-1-84dc695b48-r5mhf    1/1       Running     0          23m
  rook-ceph-osd-id-2-558878cd84-cnp67    1/1       Running     0          23m
  rook-ceph-osd-prepare-minikube-wq4f5   0/1       Completed   0          24m

Storage

For a walkthrough of the three types of storage exposed by Rook, see the guides for:

  • Block: Create block storage to be consumed by a pod (a sample claim follows this list)
  • Object: Create an object store that is accessible inside or outside the Kubernetes cluster
  • Shared File System: Create a file system to be shared across multiple pods
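
As a sketch of what consuming Rook storage looks like, once a StorageClass has been created by following the block storage guide, a pod can request a volume through an ordinary PersistentVolumeClaim. The storageClassName rook-ceph-block below is an assumed name; use whatever name you chose in that guide:

  # Hypothetical claim against a StorageClass created via the block storage guide.
  # The storageClassName "rook-ceph-block" is assumed, not created by this quickstart.
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ceph-block-pvc
  spec:
    storageClassName: rook-ceph-block
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi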

Ceph Dashboard

Ceph has a dashboard in which you can view the status of your cluster. Please see the dashboard guide for more details.
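
If you want a quick look before reading the dashboard guide, you can forward the dashboard service to your workstation. The service name and port below are assumptions; confirm them from the output of the first command:

  # List the services created by the operator and find the dashboard service.
  kubectl -n rook-ceph get service
  # Forward it locally, substituting the service name and port found above.
  kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 7000:7000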

Tools

We have created a toolbox container that contains the full suite of Ceph clients for debugging and troubleshooting your Rook cluster. Please see the toolbox readme for setup and usage information. Also see our advanced configuration document for helpful maintenance and tuning examples.
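
Once the toolbox is deployed, a quick health check from inside it looks roughly like this; the pod name rook-ceph-tools is taken from the toolbox readme and may differ in your deployment:

  # Run `ceph status` inside the toolbox pod (name assumed from the toolbox readme).
  kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph status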

Monitoring

Each Rook cluster has some built-in metrics collectors/exporters for monitoring with Prometheus. To learn how to set up monitoring for your Rook cluster, you can follow the steps in the monitoring guide.
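
As a quick sanity check that metrics are being exported, independent of a full Prometheus setup, you can forward the Ceph mgr metrics port. The service name and port below are assumptions based on common defaults; verify them with kubectl -n rook-ceph get service:

  # Assumed mgr service name and the default Ceph Prometheus module port (9283).
  kubectl -n rook-ceph port-forward service/rook-ceph-mgr 9283:9283
  # In another terminal, fetch a few metrics to confirm the exporter responds.
  curl -s http://localhost:9283/metrics | head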

Teardown

When you are done with the test cluster, see these instructions to clean up the cluster.
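
The cleanup instructions cover the full procedure, including finalizers and removing the dataDirHostPath directory from each host, but the core of it is deleting the resources in reverse order, roughly:

  # Rough sketch of teardown; see the cleanup instructions for the full procedure,
  # including removing /var/lib/rook from each host.
  kubectl delete -f cluster.yaml
  kubectl delete -f operator.yaml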