Running Ceph CSI drivers with Rook

This guide shows how to use Rook to deploy the ceph-csi drivers on a Kubernetes cluster.

Prerequisites

  1. Kubernetes v1.13+ is needed in order to support CSI Spec 1.0.
  2. The --allow-privileged flag set to true on both the kubelet and the API server.
  3. An up and running Rook instance (see the Rook Ceph quickstart guide).

CSI Drivers Enablement

Create RBAC used by CSI drivers in the same namespace as Rook Ceph Operator

  # Create RBAC. Since the Rook operator is not permitted to create RBAC rules,
  # these rules have to be created outside of the operator.
  kubectl apply -f cluster/examples/kubernetes/ceph/csi/rbac/rbd/
  kubectl apply -f cluster/examples/kubernetes/ceph/csi/rbac/cephfs/

Start Rook Ceph Operator

  kubectl apply -f cluster/examples/kubernetes/ceph/operator-with-csi.yaml

Verify CSI drivers and Operator are up and running

  # kubectl get all -n rook-ceph
  NAME READY STATUS RESTARTS AGE
  pod/csi-cephfsplugin-nd5tv 2/2 Running 1 4m5s
  pod/csi-cephfsplugin-provisioner-0 2/2 Running 0 4m5s
  pod/csi-rbdplugin-provisioner-0 4/4 Running 1 4m5s
  pod/csi-rbdplugin-wr78j 2/2 Running 1 4m5s
  pod/rook-ceph-agent-bf772 1/1 Running 0 7m57s
  pod/rook-ceph-mgr-a-7f86bb4968-wdd4l 1/1 Running 0 5m28s
  pod/rook-ceph-mon-a-648b78fc99-jthsz 1/1 Running 0 6m1s
  pod/rook-ceph-mon-b-6f55c9b6fc-nlp4r 1/1 Running 0 5m55s
  pod/rook-ceph-mon-c-69f4f466d5-4q2jk 1/1 Running 0 5m45s
  pod/rook-ceph-operator-7464bd774c-scb5c 1/1 Running 0 4m7s
  pod/rook-ceph-osd-0-7bfdf45977-n5tt9 1/1 Running 0 2m8s
  pod/rook-ceph-osd-1-88f95577d-27jk4 1/1 Running 0 2m8s
  pod/rook-ceph-osd-2-674b4dcd4c-5wzz9 1/1 Running 0 2m8s
  pod/rook-ceph-osd-3-58f6467f6b-q5wwf 1/1 Running 0 2m8s
  pod/rook-discover-6t644 1/1 Running 0 7m57s

  NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
  service/csi-cephfsplugin-provisioner ClusterIP 10.100.46.135 <none> 1234/TCP 4m5s
  service/csi-rbdplugin-provisioner ClusterIP 10.110.210.40 <none> 1234/TCP 4m5s
  service/rook-ceph-mgr ClusterIP 10.104.191.254 <none> 9283/TCP 5m13s
  service/rook-ceph-mgr-dashboard ClusterIP 10.97.152.26 <none> 8443/TCP 5m13s
  service/rook-ceph-mon-a ClusterIP 10.108.83.214 <none> 6789/TCP 6m4s
  service/rook-ceph-mon-b ClusterIP 10.104.64.44 <none> 6789/TCP 5m56s
  service/rook-ceph-mon-c ClusterIP 10.103.170.196 <none> 6789/TCP 5m45s

  NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
  daemonset.apps/csi-cephfsplugin 1 1 1 1 1 <none> 4m5s
  daemonset.apps/csi-rbdplugin 1 1 1 1 1 <none> 4m5s
  daemonset.apps/rook-ceph-agent 1 1 1 1 1 <none> 7m57s
  daemonset.apps/rook-discover 1 1 1 1 1 <none> 7m57s

  NAME READY UP-TO-DATE AVAILABLE AGE
  deployment.apps/rook-ceph-mgr-a 1/1 1 1 5m28s
  deployment.apps/rook-ceph-mon-a 1/1 1 1 6m2s
  deployment.apps/rook-ceph-mon-b 1/1 1 1 5m55s
  deployment.apps/rook-ceph-mon-c 1/1 1 1 5m45s
  deployment.apps/rook-ceph-operator 1/1 1 1 10m
  deployment.apps/rook-ceph-osd-0 1/1 1 1 2m8s
  deployment.apps/rook-ceph-osd-1 1/1 1 1 2m8s
  deployment.apps/rook-ceph-osd-2 1/1 1 1 2m8s
  deployment.apps/rook-ceph-osd-3 1/1 1 1 2m8s

  NAME DESIRED CURRENT READY AGE
  replicaset.apps/rook-ceph-mgr-a-7f86bb4968 1 1 1 5m28s
  replicaset.apps/rook-ceph-mon-a-648b78fc99 1 1 1 6m1s
  replicaset.apps/rook-ceph-mon-b-6f55c9b6fc 1 1 1 5m55s
  replicaset.apps/rook-ceph-mon-c-69f4f466d5 1 1 1 5m45s
  replicaset.apps/rook-ceph-operator-6c49994c4f 0 0 0 10m
  replicaset.apps/rook-ceph-operator-7464bd774c 1 1 1 4m7s
  replicaset.apps/rook-ceph-osd-0-7bfdf45977 1 1 1 2m8s
  replicaset.apps/rook-ceph-osd-1-88f95577d 1 1 1 2m8s
  replicaset.apps/rook-ceph-osd-2-674b4dcd4c 1 1 1 2m8s
  replicaset.apps/rook-ceph-osd-3-58f6467f6b 1 1 1 2m8s

  NAME READY AGE
  statefulset.apps/csi-cephfsplugin-provisioner 1/1 4m5s
  statefulset.apps/csi-rbdplugin-provisioner 1/1 4m5s

Once the plugins are successfully deployed, test them by running the following examples.

Test RBD CSI Driver

Create RBD StorageClass

This StorageClass expects a pool named rbd in your Ceph cluster. You can create this pool with the Rook pool CRD.
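
For reference, a minimal pool definition with the Rook pool CRD might look like the sketch below (assuming a Rook v0.9+ CephBlockPool CRD; the replica size and failure domain are illustrative and should be adjusted to your cluster):

  apiVersion: ceph.rook.io/v1
  kind: CephBlockPool
  metadata:
    name: rbd
    namespace: rook-ceph
  spec:
    failureDomain: host
    replicated:
      size: 3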

Update the monitors parameter in storageclass.yaml to reflect your Ceph monitor addresses.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/storageclass.yaml
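
For reference, storageclass.yaml defines roughly the following. This is only a sketch: the provisioner name and the secret references are assumptions based on the ceph-csi examples, and the file shipped under cluster/examples/kubernetes/ceph/csi/example/rbd/ is authoritative.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-rbd
  provisioner: rbd.csi.ceph.com   # assumption: must match the driver name registered by the rbdplugin
  parameters:
    # comma-separated list of your Ceph monitor addresses
    monitors: 10.108.83.214:6789,10.104.64.44:6789,10.103.170.196:6789
    # pool created above with the Rook pool CRD
    pool: rbd
    # Ceph users the driver authenticates as; must match the Secret created in the next step
    adminid: admin
    userid: kubernetes
    csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
    csi.storage.k8s.io/provisioner-secret-namespace: default
  reclaimPolicy: Delete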

Create RBD Secret

Create a Secret that matches the adminid or userid specified in the StorageClass.

Find the Rook Ceph operator pod (in the following example, the pod is rook-ceph-operator-7464bd774c-scb5c) and create a Ceph user called kubernetes for that pool:

  kubectl exec -ti -n rook-ceph rook-ceph-operator-7464bd774c-scb5c -- bash -c "ceph -c /var/lib/rook/rook-ceph/rook-ceph.config auth get-or-create-key client.kubernetes mon \"allow profile rbd\" osd \"profile rbd pool=rbd\""

Then create a Secret using admin and kubernetes keyrings:

In secret.yaml, you need your Ceph admin/user key encoded in base64.

Run ceph auth ls in your Rook Ceph operator pod to list the keys. To encode the key of your admin/user, run echo -n KEY | base64 and replace BASE64-ENCODED-PASSWORD with the encoded key.

  kubectl exec -ti -n rook-ceph rook-ceph-operator-6c49994c4f-pwqcx /bin/sh
  sh-4.2# ceph auth ls
  installed auth entries:

  osd.0
      key: AQA3pa1cN/fODBAAc/jIm5IQDClm+dmekSmSlg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.1
      key: AQBXpa1cTjuYNRAAkohlInoYAa6A3odTRDhnAg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.2
      key: AQB4pa1cvJidLRAALZyAtuOwArO8JZfy7Y5pFg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.3
      key: AQCcpa1cFFQRHRAALBYhqO3m0FRA9pxTOFT2eQ==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  client.admin
      key: AQD0pK1cqcBDCBAAdXNXfgAambPz5qWpsq0Mmw==
      auid: 0
      caps: [mds] allow *
      caps: [mgr] allow *
      caps: [mon] allow *
      caps: [osd] allow *
  client.bootstrap-mds
      key: AQD6pK1crJyZCxAA1UTGwtyFv3YYFcBmhWHyoQ==
      caps: [mon] allow profile bootstrap-mds
  client.bootstrap-mgr
      key: AQD6pK1c2KaZCxAATWi/I3i0/XEesSipy/HeIA==
      caps: [mon] allow profile bootstrap-mgr
  client.bootstrap-osd
      key: AQD6pK1cwa+ZCxAA7XKXRyLQpaHZ+lRXeUk8xQ==
      caps: [mon] allow profile bootstrap-osd
  client.bootstrap-rbd
      key: AQD6pK1cULmZCxAA4++Ch/iRKa52297/rbHP+w==
      caps: [mon] allow profile bootstrap-rbd
  client.bootstrap-rgw
      key: AQD6pK1cbMKZCxAAGKj5HaMoEl41LHqEafcfPA==
      caps: [mon] allow profile bootstrap-rgw
  mgr.a
      key: AQAZpa1chl+DAhAAYyolLBrkht+0sH0HljkFIg==
      caps: [mds] allow *
      caps: [mon] allow *
      caps: [osd] allow *

  # encode the admin/user key
  sh-4.2# echo -n AQD0pK1cqcBDCBAAdXNXfgAambPz5qWpsq0Mmw== | base64
  QVFEMHBLMWNxY0JEQ0JBQWRYTlhmZ0FhbWJQejVxV3BzcTBNbXc9PQ==
  # or
  sh-4.2# ceph auth get-key client.admin | base64
  QVFEMHBLMWNxY0JEQ0JBQWRYTlhmZ0FhbWJQejVxV3BzcTBNbXc9PQ==
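
For reference, secret.yaml carries the base64-encoded keys under entries named after the Ceph users, roughly as sketched below (the secret name and entry names are assumptions drawn from the admin and kubernetes users above; the shipped secret.yaml is authoritative):

  apiVersion: v1
  kind: Secret
  metadata:
    name: csi-rbd-secret
    namespace: default
  data:
    # base64-encoded key of client.admin
    admin: BASE64-ENCODED-PASSWORD
    # base64-encoded key of client.kubernetes created earlier
    kubernetes: BASE64-ENCODED-PASSWORD

Once the placeholders are replaced with your encoded keys, create the Secret:
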
  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/secret.yaml

Create RBD PersistentVolumeClaim

Make sure your storageClassName is the name of the StorageClass previously defined in storageclass.yaml.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/pvc.yaml
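
The claim itself is plain Kubernetes; pvc.yaml looks roughly like this (names and size match the example output in the next step; the shipped pvc.yaml is authoritative):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rbd-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: csi-rbd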

Verify RBD PVC has successfully been created

  # kubectl get pvc
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  rbd-pvc Bound pvc-c20495c0d5de11e8 1Gi RWO csi-rbd 21s

If your PVC status isn't Bound, check the csi-rbdplugin pod logs to see what is preventing the PVC from being created and bound.

Create RBD demo Pod

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/pod.yaml
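
The demo pod simply mounts the PVC as a volume; a rough sketch follows (the pod name, image, and mount path are illustrative assumptions; pod.yaml in the repository is authoritative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: csirbd-demo-pod
  spec:
    containers:
      - name: web-server
        image: nginx
        volumeMounts:
          - name: mypvc
            mountPath: /var/lib/www/html
    volumes:
      - name: mypvc
        persistentVolumeClaim:
          claimName: rbd-pvc
          readOnly: false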

When running rbd list block --pool [yourpool] in one of your Ceph pods, you should see the created PVC:

  # rbd list block --pool rbd
  pvc-c20495c0d5de11e8

Additional features

RBD Snapshots

Since this feature is still in alpha (Kubernetes 1.12+), make sure to enable the VolumeSnapshotDataSource feature gate in your Kubernetes cluster.
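
For example, on clusters where you manage the control plane flags directly, the gate can be enabled by adding the following to the kube-apiserver arguments (how you pass component flags depends on how your cluster is deployed):

  --feature-gates=VolumeSnapshotDataSource=true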

Create RBD SnapshotClass

You need to create a SnapshotClass. The purpose of a SnapshotClass is defined in the Kubernetes documentation. In short, as the documentation describes it:

Just like StorageClass provides a way for administrators to describe the “classes” of storage they offer when provisioning a volume, VolumeSnapshotClass provides a way to describe the “classes” of storage when provisioning a volume snapshot.

In snapshotclass.yaml, the csi.storage.k8s.io/snapshotter-secret-name parameter should reference the name of the secret created for the rbdplugin. Set monitors to a comma-separated list of your Ceph monitors and pool to the Ceph pool name.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/snapshotclass.yaml
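
For reference, snapshotclass.yaml resembles the sketch below (the snapshotter name and secret references are assumptions; the shipped file is authoritative, and the class name matches the example output further down):

  apiVersion: snapshot.storage.k8s.io/v1alpha1
  kind: VolumeSnapshotClass
  metadata:
    name: csi-rbdplugin-snapclass
  snapshotter: rbd.csi.ceph.com   # assumption: must match the driver name registered by the rbdplugin
  parameters:
    monitors: 10.108.83.214:6789,10.104.64.44:6789,10.103.170.196:6789
    pool: rbd
    csi.storage.k8s.io/snapshotter-secret-name: csi-rbd-secret
    csi.storage.k8s.io/snapshotter-secret-namespace: default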

Create VolumeSnapshot

In snapshot.yaml, snapshotClassName should be the name of the VolumeSnapshotClass previously created, and the source name should be the name of the PVC you created earlier.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/snapshot.yaml
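
A sketch of snapshot.yaml under the v1alpha1 snapshot API (the snapshot name matches the example output below; the shipped file is authoritative):

  apiVersion: snapshot.storage.k8s.io/v1alpha1
  kind: VolumeSnapshot
  metadata:
    name: rbd-pvc-snapshot
  spec:
    snapshotClassName: csi-rbdplugin-snapclass
    source:
      name: rbd-pvc
      kind: PersistentVolumeClaim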

Verify RBD Snapshot has successfully been created

  # kubectl get volumesnapshotclass
  NAME AGE
  csi-rbdplugin-snapclass 4s

  # kubectl get volumesnapshot
  NAME AGE
  rbd-pvc-snapshot 6s

In one of your Ceph pods, run rbd snap ls [name-of-your-pvc]. The output should be similar to this:

  # rbd snap ls pvc-c20495c0d5de11e8
  SNAPID NAME SIZE TIMESTAMP
  4 csi-rbd-pvc-c20495c0d5de11e8-snap-4c0b455b-d5fe-11e8-bebb-525400123456 1024 MB Mon Oct 22 13:28:03 2018

Restore the snapshot to a new PVC

In pvc-restore.yaml, the dataSource name should be the name of the VolumeSnapshot previously created, and the dataSource kind should be VolumeSnapshot.

Create a new PVC from the snapshot

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/rbd/pvc-restore.yaml
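
A sketch of pvc-restore.yaml (the restored claim name matches the example output below; the shipped file is authoritative):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: rbd-pvc-restore
  spec:
    storageClassName: csi-rbd
    dataSource:
      name: rbd-pvc-snapshot
      kind: VolumeSnapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi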

Verify RBD clone PVC has successfully been created

  # kubectl get pvc
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  rbd-pvc Bound pvc-84294e34-577a-11e9-b34f-525400581048 1Gi RWO csi-rbd 34m
  rbd-pvc-restore Bound pvc-575537bf-577f-11e9-b34f-525400581048 1Gi RWO csi-rbd 8s

RBD resource Cleanup

To clean your cluster of the resources created by this example, run the following:

If you have tested snapshots, delete the snapshot class, snapshot, and restored PVC created to test the snapshot feature:

  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/pvc-restore.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/snapshot.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/snapshotclass.yaml

  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/pod.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/pvc.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/secret.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/rbd/storageclass.yaml

Test CephFS CSI Driver

Create CephFS StorageClass

This StorageClass expects a pool named cephfs_data in your Ceph cluster. You can create a filesystem and its pools with the Rook filesystem CRD.
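
For reference, a minimal filesystem definition with the Rook filesystem CRD might look like the sketch below (the filesystem name, replica sizes, and MDS settings are illustrative assumptions). Note that Rook derives the pool names from the filesystem name, so check the actual data pool name (for example with ceph osd pool ls) and make sure the pool parameter in the StorageClass matches it:

  apiVersion: ceph.rook.io/v1
  kind: CephFilesystem
  metadata:
    name: myfs
    namespace: rook-ceph
  spec:
    metadataPool:
      replicated:
        size: 3
    dataPools:
      - replicated:
          size: 3
    metadataServer:
      activeCount: 1
      activeStandby: true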

Update the monitors parameter in storageclass.yaml to reflect your Ceph monitor addresses.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/cephfs/storageclass.yaml
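
For reference, the CephFS storageclass.yaml defines roughly the following. This is only a sketch: the provisioner name and the secret references are assumptions based on the ceph-csi examples, and the shipped file is authoritative.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-cephfs
  provisioner: cephfs.csi.ceph.com   # assumption: must match the driver name registered by the cephfsplugin
  parameters:
    monitors: 10.108.83.214:6789,10.104.64.44:6789,10.103.170.196:6789
    # "true" lets the plugin provision volumes in the pool below;
    # "false" mounts an existing CephFS path instead
    provisionVolume: "true"
    pool: cephfs_data
    csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/provisioner-secret-namespace: default
  reclaimPolicy: Delete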

Create CephFS Secret

Create a Secret that matches the provisionVolume type specified in the StorageClass.

In secret.yaml, you need your Ceph admin/user ID and key encoded in base64. Encode the admin/user ID in base64 and replace BASE64-ENCODED-USER with the result:

  $ echo -n admin | base64
  YWRtaW4=

Run ceph auth ls in your Rook Ceph operator pod to list the keys. To encode the key of your admin/user, run echo -n KEY | base64 and replace BASE64-ENCODED-PASSWORD with the encoded key.

  kubectl exec -ti -n rook-ceph rook-ceph-operator-6c49994c4f-pwqcx /bin/sh
  sh-4.2# ceph auth ls
  installed auth entries:

  osd.0
      key: AQA3pa1cN/fODBAAc/jIm5IQDClm+dmekSmSlg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.1
      key: AQBXpa1cTjuYNRAAkohlInoYAa6A3odTRDhnAg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.2
      key: AQB4pa1cvJidLRAALZyAtuOwArO8JZfy7Y5pFg==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  osd.3
      key: AQCcpa1cFFQRHRAALBYhqO3m0FRA9pxTOFT2eQ==
      caps: [mgr] allow profile osd
      caps: [mon] allow profile osd
      caps: [osd] allow *
  client.admin
      key: AQD0pK1cqcBDCBAAdXNXfgAambPz5qWpsq0Mmw==
      auid: 0
      caps: [mds] allow *
      caps: [mgr] allow *
      caps: [mon] allow *
      caps: [osd] allow *
  client.bootstrap-mds
      key: AQD6pK1crJyZCxAA1UTGwtyFv3YYFcBmhWHyoQ==
      caps: [mon] allow profile bootstrap-mds
  client.bootstrap-mgr
      key: AQD6pK1c2KaZCxAATWi/I3i0/XEesSipy/HeIA==
      caps: [mon] allow profile bootstrap-mgr
  client.bootstrap-osd
      key: AQD6pK1cwa+ZCxAA7XKXRyLQpaHZ+lRXeUk8xQ==
      caps: [mon] allow profile bootstrap-osd
  client.bootstrap-rbd
      key: AQD6pK1cULmZCxAA4++Ch/iRKa52297/rbHP+w==
      caps: [mon] allow profile bootstrap-rbd
  client.bootstrap-rgw
      key: AQD6pK1cbMKZCxAAGKj5HaMoEl41LHqEafcfPA==
      caps: [mon] allow profile bootstrap-rgw
  mgr.a
      key: AQAZpa1chl+DAhAAYyolLBrkht+0sH0HljkFIg==
      caps: [mds] allow *
      caps: [mon] allow *
      caps: [osd] allow *

  # encode the admin/user key
  sh-4.2# echo -n AQD0pK1cqcBDCBAAdXNXfgAambPz5qWpsq0Mmw== | base64
  QVFEMHBLMWNxY0JEQ0JBQWRYTlhmZ0FhbWJQejVxV3BzcTBNbXc9PQ==
  # or
  sh-4.2# ceph auth get-key client.admin | base64
  QVFEMHBLMWNxY0JEQ0JBQWRYTlhmZ0FhbWJQejVxV3BzcTBNbXc9PQ==
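
For reference, the CephFS secret.yaml carries both the base64-encoded ID and key, roughly as sketched below (the secret name and entry names are assumptions following the ceph-csi CephFS examples; the shipped secret.yaml is authoritative):

  apiVersion: v1
  kind: Secret
  metadata:
    name: csi-cephfs-secret
    namespace: default
  data:
    # used when provisionVolume is "true"
    adminID: BASE64-ENCODED-USER
    adminKey: BASE64-ENCODED-PASSWORD
    # used when provisionVolume is "false"
    userID: BASE64-ENCODED-USER
    userKey: BASE64-ENCODED-PASSWORD

Once the placeholders are replaced with your encoded values, create the Secret:
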
  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/cephfs/secret.yaml

Create CephFS PersistentVolumeClaim

In pvc.yaml, make sure your storageClassName is the name of the StorageClass previously defined in storageclass.yaml.

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/cephfs/pvc.yaml
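
A sketch of pvc.yaml (names and size match the example output in the next step; the shipped file is authoritative):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: cephfs-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: csi-cephfs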

Verify CephFS PVC has successfully been created

  # kubectl get pvc
  NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
  cephfs-pvc Bound pvc-6bc76846-3a4a-11e9-971d-525400c2d871 1Gi RWO csi-cephfs 25s

If your PVC status isn't Bound, check the csi-cephfsplugin pod logs to see what is preventing the PVC from being created and bound.

Create CephFS demo Pod

  kubectl create -f cluster/examples/kubernetes/ceph/csi/example/cephfs/pod.yaml

Once the PVC is attached to the pod, the pod creation process will continue.

CephFS resource Cleanup

To clean your cluster of the resources created by this example, run the following:

  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/cephfs/pod.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/cephfs/pvc.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/cephfs/secret.yaml
  kubectl delete -f cluster/examples/kubernetes/ceph/csi/example/cephfs/storageclass.yaml