Using Ceph RBD for dynamic provisioning

Overview

This topic provides a complete example of using an existing Ceph cluster for OKD persistent storage. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.

  • Run all oc commands on the OKD master host.

  • The OKD all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

Creating a pool for dynamic volumes

  1. Install the latest ceph-common package:

    yum install -y ceph-common

    The ceph-common library must be installed on all schedulable OKD nodes.
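    As a quick sanity check (optional, assuming the package installed cleanly), confirm that the rbd client tool is now available on the node:

    $ rbd --version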

  2. From an administrator or MON node, create a new pool for dynamic volumes, for example:

    $ ceph osd pool create kube 1024
    $ ceph auth get-or-create client.kube mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

    Using the default pool, rbd, is an option, but it is not recommended.
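    To confirm that the pool and the client.kube credentials were created (an optional check, assuming the commands above completed without errors), run the following from the same administrator or MON node:

    $ ceph osd lspools
    $ ceph auth get client.kube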

Using an existing Ceph cluster for dynamic persistent storage

To use an existing Ceph cluster for dynamic persistent storage:

  1. Generate the client.admin base64-encoded key:

    $ ceph auth get-key client.admin | base64

    Ceph secret definition example

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
      namespace: kube-system
    data:
      key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)
    type: kubernetes.io/rbd (2)
    (1) This base64-encoded key is generated on one of the Ceph MON nodes by running ceph auth get-key client.admin | base64, then copying the output and pasting it as the secret key's value.
    (2) This value is required for Ceph RBD to work with dynamic provisioning.
  2. Create the Ceph secret for the client.admin:

    $ oc create -f ceph-secret.yaml
    secret "ceph-secret" created
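    Alternatively, the same secret can be created directly from the command line. This is a sketch that assumes the host running oc can also run ceph commands; otherwise, substitute the key you retrieved in the previous step. Note that --from-literal takes the raw key, which the API server stores base64-encoded:

    $ oc create secret generic ceph-secret --type="kubernetes.io/rbd" \
        --from-literal=key="$(ceph auth get-key client.admin)" -n kube-system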
  3. Verify that the secret was created:

    $ oc get secret ceph-secret
    NAME          TYPE                DATA      AGE
    ceph-secret   kubernetes.io/rbd   1         5d
  4. Create the storage class:

    $ oc create -f ceph-storageclass.yaml
    storageclass "dynamic" created

    Ceph storage class example

    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: dynamic
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 (1)
      adminId: admin (2)
      adminSecretName: ceph-secret (3)
      adminSecretNamespace: kube-system (4)
      pool: kube (5)
      userId: kube (6)
      userSecretName: ceph-user-secret (7)
    (1) A comma-delimited list of the IP addresses of the Ceph monitors. This value is required.
    (2) The Ceph client ID that is capable of creating images in the pool. The default is admin.
    (3) The secret name for adminId. This value is required. The secret that you provide must have the type kubernetes.io/rbd.
    (4) The namespace for adminSecretName. The default is default.
    (5) The Ceph RBD pool. The default is rbd, but using the default pool is not recommended.
    (6) The Ceph client ID that is used to map the Ceph RBD image. The default is the same as adminId.
    (7) The name of the Ceph secret for userId to map the Ceph RBD image. It must exist in the same namespace as the PVCs. Unless you set the Ceph secret as the default in new projects, you must provide this parameter value.
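    The userSecretName parameter refers to a Ceph secret that must exist in every project that creates RBD-backed PVCs. A minimal sketch of that secret, assuming you reuse the client.kube key created earlier and base64-encode it the same way as the admin key:

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user-secret
    data:
      key: <base64-encoded client.kube key>
    type: kubernetes.io/rbd

    Create it in each project that needs Ceph-backed storage, for example with oc create -f ceph-user-secret.yaml -n <project>, or set it as a project default as described at the end of this topic.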
  5. Verify that the storage class was created:

    $ oc get storageclasses
    NAME                TYPE
    dynamic (default)   kubernetes.io/rbd
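    To double-check the parameters that the provisioner will use (optional), you can also inspect the class:

    $ oc describe storageclass dynamic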
  6. Create the PVC object definition:

    PVC object definition example

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-claim-dynamic
    spec:
      accessModes: (1)
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi (2)
    (1) The accessModes do not enforce access rights but instead act as labels to match a PV to a PVC.
    (2) This claim looks for PVs that offer 2Gi or greater capacity.
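    Because dynamic is annotated as the default storage class, the claim above does not name a class explicitly. If the class were not the default, a sketch of the same claim that selects it by name (assuming your cluster supports the storageClassName field) would add the following under spec:

    spec:
      storageClassName: dynamic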
  7. Create the PVC:

    $ oc create -f ceph-pvc.yaml
    persistentvolumeclaim "ceph-claim-dynamic" created
  8. Verify that the PVC was created and bound to the expected PV:

    $ oc get pvc
    NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    ceph-claim-dynamic   Bound     pvc-f548d663-3cac-11e7-9937-0024e8650c7a   2Gi        RWO           1m
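    The dynamically provisioned PV that backs the claim can be listed as well; the generated name differs in every cluster:

    $ oc get pv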
  9. Create the pod object definition:

    Pod object definition example

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-pod1 (1)
    spec:
      containers:
      - name: ceph-busybox
        image: busybox (2)
        command: ["sleep", "60000"]
        volumeMounts:
        - name: ceph-vol1 (3)
          mountPath: /usr/share/busybox (4)
          readOnly: false
      volumes:
      - name: ceph-vol1
        persistentVolumeClaim:
          claimName: ceph-claim-dynamic (5)
    (1) The name of this pod as displayed by oc get pod.
    (2) The image run by this pod. In this case, busybox is set to sleep.
    (3) The name of the volume. This name must be the same in both the containers and volumes sections.
    (4) The mount path in the container.
    (5) The PVC that is bound to the Ceph RBD cluster.
  10. Create the pod:

    $ oc create -f ceph-pod1.yaml
    pod "ceph-pod1" created
  11. Verify that the pod was created:

    $ oc get pod
    NAME        READY     STATUS    RESTARTS   AGE
    ceph-pod1   1/1       Running   0          2m

After a minute or so, the pod status changes to Running.
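To confirm that the RBD image is mounted in the container (an optional check, assuming the pod has reached the Running state), inspect the mount point from inside the pod:

$ oc exec ceph-pod1 -- df -h /usr/share/busybox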

Setting ceph-user-secret as the default for projects

To make persistent storage available to every project, you must modify the default project template. Adding the Ceph user secret to the default project template gives every user who can create a project access to the Ceph cluster. See modifying the default project template for more information.

Default project example

  ...
  apiVersion: v1
  kind: Template
  metadata:
    creationTimestamp: null
    name: project-request
  objects:
  - apiVersion: v1
    kind: Project
    metadata:
      annotations:
        openshift.io/description: ${PROJECT_DESCRIPTION}
        openshift.io/display-name: ${PROJECT_DISPLAYNAME}
        openshift.io/requester: ${PROJECT_REQUESTING_USER}
      creationTimestamp: null
      name: ${PROJECT_NAME}
    spec: {}
    status: {}
  - apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-user-secret
    data:
      key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcS96cE42SW1JVUQvekE9PQ== (1)
    type: kubernetes.io/rbd
  ...
(1) Place your Ceph user key here in base64 format.
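After modifying the template, register it and point the master configuration at it. This is a sketch that assumes an OKD 3.x cluster, that your modified template is saved as template.yaml, and that the project request template lives in the default project:

$ oc create -f template.yaml -n default

Then reference it from /etc/origin/master/master-config.yaml:

projectConfig:
  projectRequestTemplate: "default/project-request"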