Ceph Storage Quickstart

This guide will walk you through the basic setup of a Ceph cluster and enable you to consume block, object, and file storage from other pods running in your cluster.

Minimum Version

Rook supports Kubernetes v1.8 or higher.

Prerequisites

To make sure you have a Kubernetes cluster that is ready for Rook, you can follow these instructions.

If you are using dataDirHostPath to persist Rook data on the Kubernetes hosts, make sure your hosts have at least 5GB of space available on the specified path.
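
As a quick sanity check, you can verify the free space on each host before installing. This is a minimal sketch; /var/lib/rook is simply the default dataDirHostPath used in the example cluster spec later in this guide:

  # check available space on the filesystem that will hold dataDirHostPath
  # (the directory itself may not exist yet, so check its parent)
  df -h /var/lib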

TL;DR

If you’re feeling lucky, a simple Rook cluster can be created with the following kubectl commands. For the more detailed install, skip to the next section to deploy the Rook operator.

  cd cluster/examples/kubernetes/ceph
  kubectl create -f operator.yaml
  kubectl create -f cluster.yaml

After the cluster is running, you can create block, object, or file storage to be consumed by other applications in your cluster.

Deploy the Rook Operator

The first step is to deploy the Rook system components, which include the Rook agent running on each node in your cluster as well as the Rook operator pod.

  cd cluster/examples/kubernetes/ceph
  kubectl create -f operator.yaml
  # verify the rook-ceph-operator, rook-ceph-agent, and rook-discover pods are in the `Running` state before proceeding
  kubectl -n rook-ceph-system get pod

You can also deploy the operator with the Rook Helm Chart.
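
If you prefer Helm, the install is only a couple of commands. The sketch below assumes the Rook stable chart repository at https://charts.rook.io/stable and a chart named rook-ceph; check the Helm chart documentation for the current repository URL and configurable values:

  # add the Rook chart repository and install the operator into rook-ceph-system
  helm repo add rook-stable https://charts.rook.io/stable
  helm install --namespace rook-ceph-system rook-stable/rook-ceph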

Create a Rook Cluster

Now that the Rook operator, agent, and discover pods are running, we can create the Rook cluster. For the cluster to survive reboots, make sure you set the dataDirHostPath property to a path that is valid for your hosts. For more settings, see the documentation on configuring the cluster.

Save the cluster spec as cluster.yaml:

#################################################################################
# This example first defines some necessary namespace and RBAC security objects.
# The actual Ceph Cluster CRD example can be found at the bottom of this example.
#################################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
---
# Aspects of ceph-mgr that require access to the system namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-system
  namespace: rook-ceph-system
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
---
# Aspects of ceph-mgr that operate within the cluster's namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - batch
  resources:
  - jobs
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - delete
- apiGroups:
  - ceph.rook.io
  resources:
  - "*"
  verbs:
  - "*"
---
# Allow the operator to create resources in this cluster's namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph-system
---
# Allow the osd pods in this namespace to work with configmaps
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-osd
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-osd
subjects:
- kind: ServiceAccount
  name: rook-ceph-osd
  namespace: rook-ceph
---
# Allow the ceph mgr to access the cluster-specific resources necessary for the mgr modules
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-mgr
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
# Allow the ceph mgr to access the rook system resources necessary for the mgr modules
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-system
  namespace: rook-ceph-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-mgr-system
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
# Allow the ceph mgr to access cluster-wide resources necessary for the mgr modules
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-mgr-cluster
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-mgr-cluster
subjects:
- kind: ServiceAccount
  name: rook-ceph-mgr
  namespace: rook-ceph
---
#################################################################################
# The Ceph Cluster CRD example
#################################################################################
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # For the latest ceph images, see https://hub.docker.com/r/ceph/ceph/tags
    image: ceph/ceph:v13.2.4-20190109
  dataDirHostPath: /var/lib/rook
  dashboard:
    enabled: true
  mon:
    count: 3
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: false
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"

Create the cluster:

  kubectl create -f cluster.yaml

Use kubectl to list pods in the rook-ceph namespace. You should be able to see the following pods once they are all running. The number of osd pods will depend on the number of nodes in the cluster and the number of devices and directories configured.

  $ kubectl -n rook-ceph get pod
  NAME                                  READY   STATUS      RESTARTS   AGE
  rook-ceph-mgr-a-9c44495df-ln9sq       1/1     Running     0          1m
  rook-ceph-mon-a-69fb9c78cd-58szd      1/1     Running     0          2m
  rook-ceph-mon-b-cf4ddc49c-c756f       1/1     Running     0          2m
  rook-ceph-mon-c-5b467747f4-8cbmv      1/1     Running     0          2m
  rook-ceph-osd-0-f6549956d-6z294       1/1     Running     0          1m
  rook-ceph-osd-1-5b96b56684-r7zsp      1/1     Running     0          1m
  rook-ceph-osd-prepare-mynode-ftt57    0/1     Completed   0          1m
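
If the osd pods do not appear right away, it can help to watch the pods as they come up and to check the operator's log. The sketch below assumes the operator deployment carries the label app=rook-ceph-operator in the rook-ceph-system namespace:

  # watch the cluster pods come up
  kubectl -n rook-ceph get pod -w

  # inspect the operator log if pods are stuck or missing
  kubectl -n rook-ceph-system logs -l app=rook-ceph-operator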

Storage

For a walkthrough of the three types of storage exposed by Rook, see the guides for the following (a quick block storage sketch follows the list):

  • Block: Create block storage to be consumed by a pod
  • Object: Create an object store that is accessible inside or outside the Kubernetes cluster
  • Shared File System: Create a file system to be shared across multiple pods
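
As a quick taste of the block storage path, the sketch below assumes the example manifests include a storageclass.yaml that creates a replicated pool and a StorageClass (commonly named rook-ceph-block); see the block storage guide for the authoritative steps:

  # create the pool and StorageClass from the example manifest
  kubectl create -f storageclass.yaml

  # confirm the StorageClass exists; PVCs can then reference it by name
  kubectl get storageclass rook-ceph-block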

Ceph Dashboard

Ceph has a dashboard in which you can view the status of your cluster. Please see the dashboard guide for more details.
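
If you do not yet have an external route to the dashboard, a port-forward is a quick way to reach it. This sketch assumes the dashboard service is named rook-ceph-mgr-dashboard and listens on 8443 (https); the service name and port can differ depending on your Ceph version and dashboard settings, so consult the dashboard guide:

  # forward the dashboard locally, then browse to https://localhost:8443
  kubectl -n rook-ceph port-forward service/rook-ceph-mgr-dashboard 8443:8443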

Tools

We have created a toolbox container that contains the full suite of Ceph clients for debugging and troubleshooting your Rook cluster. Please see the toolbox readme for setup and usage information. Also see our advanced configuration document for helpful maintenance and tuning examples.
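
As a sketch of a typical toolbox session, assuming the example directory includes a toolbox.yaml whose pod is labeled app=rook-ceph-tools (see the toolbox readme for the exact manifest):

  # deploy the toolbox pod
  kubectl create -f toolbox.yaml

  # open a session in the toolbox and check cluster health
  kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
    -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph status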

Monitoring

Each Rook cluster has built-in metrics collectors/exporters for monitoring with Prometheus. To learn how to set up monitoring for your Rook cluster, you can follow the steps in the monitoring guide.
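
To peek at the raw metrics before wiring up Prometheus, you can port-forward the mgr metrics endpoint. This assumes the rook-ceph-mgr service exposes the ceph-mgr prometheus module on port 9283, which may vary with your configuration:

  # forward the metrics port and fetch the Prometheus-format metrics
  kubectl -n rook-ceph port-forward service/rook-ceph-mgr 9283:9283 &
  sleep 2 && curl -s http://localhost:9283/metrics | head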

Teardown

When you are done with the test cluster, see these instructions to clean up the cluster.
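
As a rough outline of the cleanup, assuming you still have the cluster.yaml and operator.yaml used above (the teardown instructions cover the full procedure, including finalizers and wiping disks):

  # delete the Ceph cluster and wait for its pods to terminate
  kubectl delete -f cluster.yaml

  # then remove the operator and its system components
  kubectl delete -f operator.yaml

  # finally, on each host, remove the persisted state (the dataDirHostPath)
  sudo rm -rf /var/lib/rook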