07-5. Integrating Ceph Persistent Storage

Ceph cluster overview

Machine list:

  1. 172.27.128.100 mon.m7-common-dfs04
  2. 172.27.128.101 mon.m7-common-dfs03
  3. 172.27.128.102 mon.m7-common-dfs02
  4. 172.27.128.103 mon.m7-common-dfs01

Component deployment:

  1. Deploy: m7-common-dfs04
  2. Mon: m7-common-dfs04, m7-common-dfs02, m7-common-dfs03
  3. Mgr: m7-common-dfs04, m7-common-dfs02, m7-common-dfs03
  4. Mds: m7-common-dfs04, m7-common-dfs02, m7-common-dfs03
  5. OSD:
     • 0-2: mon.m7-common-dfs04
     • 3-5: mon.m7-common-dfs03
     • 6-8: mon.m7-common-dfs02
     • 9-11: mon.m7-common-dfs01
  6. RGW: mon.m7-common-dfs04

Install the Ceph client tools

The Ceph client tools must be installed on every K8S node that uses Ceph.

Create the yum repository configuration file:

  sudo yum install -y epel-release
  cat << "EOM" > /etc/yum.repos.d/ceph.repo
  [ceph]
  name=Ceph packages for $basearch
  baseurl=http://download.ceph.com/rpm-luminous/el7/$basearch
  enabled=1
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc
  priority=1
  [ceph-noarch]
  name=Ceph noarch packages
  baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
  enabled=1
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc
  priority=1
  [ceph-source]
  name=Ceph source packages
  baseurl=http://download.ceph.com/rpm-luminous/el7/SRPMS
  enabled=1
  gpgcheck=1
  type=rpm-md
  gpgkey=https://download.ceph.com/keys/release.asc
  priority=1
  EOM
  • Note: the repo version must match the version of the Ceph cluster; the configuration above uses the luminous repo. The cluster version can be confirmed as shown below.
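
A quick way to confirm which release the repo should track is to check the version the cluster is actually running (a minimal sketch; it assumes SSH access to a Ceph node such as 172.27.128.100):

  # Version of the locally installed Ceph binaries on the cluster node
  ceph --version
  # Luminous and later can also report the versions of all running daemons
  ceph versions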

Install the Ceph client tools:

  yum clean all && yum update
  yum install -y ceph-common

Command-line tools installed by the package:

  $ rpm -ql ceph-common|grep bin
  /usr/bin/ceph
  /usr/bin/ceph-authtool
  /usr/bin/ceph-brag
  /usr/bin/ceph-conf
  /usr/bin/ceph-dencoder
  /usr/bin/ceph-post-file
  /usr/bin/ceph-rbdnamer
  /usr/bin/ceph-syn
  /usr/bin/rados
  /usr/bin/rbd

Mount CephFS

Create the local mount directory, e.g. devops:

  sudo mkdir -p /etc/ceph /mnt/cephfs/k8s/devops

Create the secret file:

  sudo scp root@172.27.128.100:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
  sudo awk '/key = / {print $3}' /etc/ceph/ceph.client.admin.keyring | sudo tee /etc/ceph/ceph-admin.secret
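
Before mounting, it is worth verifying that the copied keyring actually authenticates against the cluster (a sketch; -m, --name and --keyring are standard ceph CLI options, and -c /dev/null avoids complaints about a missing /etc/ceph/ceph.conf on a client-only node):

  # Should print the cluster status if the admin keyring is valid
  sudo ceph -s -m 172.27.128.100:6789 --name client.admin \
       --keyring /etc/ceph/ceph.client.admin.keyring -c /dev/null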

Mount the CephFS root directory /k8s/:

  sudo mount -t ceph 172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789:/k8s/ /mnt/cephfs/k8s/devops -o name=admin,secretfile=/etc/ceph/ceph-admin.secret,_netdev,noatime
  • Specify multiple Mon addresses so the mount stays available and fault tolerant if one monitor fails.
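
To confirm the mount succeeded, standard tools are enough (a small sketch):

  # The filesystem type should be reported as ceph
  df -hT /mnt/cephfs/k8s/devops
  mount | grep /mnt/cephfs/k8s/devops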

Create a cluster-specific data directory, e.g. devops, under the CephFS root /k8s/:

  sudo mkdir /mnt/cephfs/k8s/devops/devops   # the CephFS root /k8s/ is currently mounted at /mnt/cephfs/k8s/devops

Remount the newly created cluster-specific CephFS directory (/k8s/devops) onto the local directory (/mnt/cephfs/k8s/devops):

  sudo umount /mnt/cephfs/k8s/devops
  sudo mount -t ceph 172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789:/k8s/devops /mnt/cephfs/k8s/devops -o name=admin,secretfile=/etc/ceph/ceph-admin.secret,_netdev,noatime

Add an entry to /etc/fstab so the mount is restored automatically at boot:

  172.27.128.100:6789:/k8s/devops/ /mnt/cephfs/k8s/devops ceph name=admin,secretfile=/etc/ceph/ceph-admin.secret,_netdev,noatime 0 0
  • The _netdev mount option is mandatory; without it the machine hangs at the cephfs mount step during boot.
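
For the same fault tolerance as the interactive mount command, the fstab entry can also list all three monitors (a sketch, using the same options as above):

  172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789:/k8s/devops/ /mnt/cephfs/k8s/devops ceph name=admin,secretfile=/etc/ceph/ceph-admin.secret,_netdev,noatime 0 0

After editing /etc/fstab, running `sudo umount /mnt/cephfs/k8s/devops && sudo mount -a` remounts from fstab and verifies the entry without rebooting.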

Create the ceph admin secret

  [root@m7-demo-136001 k8s]# cd /opt/k8s
  [root@m7-demo-136001 k8s]# scp root@172.27.128.100:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
  [root@m7-demo-136001 k8s]# cat /etc/ceph/ceph.client.admin.keyring
  [client.admin]
      key = AQCYLTdbCyxZBhAAbGfK3T2tczjbhhbR0UWq1w==
  [root@m7-demo-136001 k8s]# Secret=$(awk '/key = / {print $3}' /etc/ceph/ceph.client.admin.keyring | base64)
  [root@m7-demo-136001 k8s]# cat > ceph-secret-admin.yaml <<EOF
  apiVersion: v1
  kind: Secret
  type: kubernetes.io/rbd
  metadata:
    name: ceph-secret-admin
  data:
    key: $Secret
  EOF

Note: if the admin secret is created from the command line instead, the key must not be base64-encoded manually:

  $ kubectl create secret generic ceph-secret-admin --from-literal=key='AQCYLTdbCyxZBhAAbGfK3T2tczjbhhbR0UWq1w==' --namespace=kube-system --type=kubernetes.io/rbd

  [root@m7-demo-136001 k8s]# kubectl create -f ceph-secret-admin.yaml
  secret "ceph-secret-admin" created

Note: a PVC can only use an rbd secret that lives in its own namespace, so the ceph-secret-admin defined above can only be used by PVCs in the default namespace. PVCs in other namespaces that want to use this StorageClass need the same Secret defined in their own namespace (see the sketch below).
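
A minimal sketch of recreating the user secret in another namespace (the devops namespace here is only an example):

  kubectl create namespace devops
  kubectl create secret generic ceph-secret-admin \
    --from-literal=key='AQCYLTdbCyxZBhAAbGfK3T2tczjbhhbR0UWq1w==' \
    --namespace=devops --type=kubernetes.io/rbd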

Create the StorageClass

  [root@m7-demo-136001 k8s]# cat >ceph-rbd-storage-class.yaml <<EOF
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: ceph
  provisioner: kubernetes.io/rbd
  parameters:
    monitors: 172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "default"
    pool: rbd
    userId: admin
    userSecretName: ceph-secret-admin
  EOF
  • Additional optional parameters with defaults: imageFormat and imageFeatures (see the sketch after the creation commands below);
  • imageFormat defaults to "1"; setting it to "2" requires a v3.11+ kernel and enables advanced features such as clone;
  • imageFeatures defaults to empty, i.e. no features are enabled; currently only the layering feature is supported;
  [root@m7-demo-136001 k8s]# kubectl create -f ceph-rbd-storage-class.yaml
  storageclass "ceph" created
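
A sketch of the same StorageClass with the two optional parameters set explicitly (the name ceph-layering is made up for illustration; only use imageFormat "2" if the nodes run a v3.11+ kernel):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: ceph-layering
  provisioner: kubernetes.io/rbd
  parameters:
    monitors: 172.27.128.100:6789,172.27.128.101:6789,172.27.128.102:6789
    adminId: admin
    adminSecretName: ceph-secret-admin
    adminSecretNamespace: "default"
    pool: rbd
    userId: admin
    userSecretName: ceph-secret-admin
    imageFormat: "2"
    imageFeatures: "layering"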

Check the created StorageClass:

  $ kubectl get storageclass
  NAME      PROVISIONER
  ceph      kubernetes.io/rbd

Setting the storageclass.kubernetes.io/is-default-class annotation of a StorageClass object to true makes it the default StorageClass:

  $ kubectl patch storageclass ceph -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  storageclass "ceph" patched
  $ kubectl get storageclass
  NAME             PROVISIONER
  ceph (default)   kubernetes.io/rbd

Create a PVC that uses the StorageClass

  [root@m7-demo-136001 k8s]# cat >ceph-pvc-storageClass.json <<EOF
  {
    "kind": "PersistentVolumeClaim",
    "apiVersion": "v1",
    "metadata": {
      "name": "pvc-test-claim"
    },
    "spec": {
      "accessModes": [
        "ReadWriteOnce"
      ],
      "resources": {
        "requests": {
          "storage": "1Gi"
        }
      },
      "storageClassName": "ceph"
    }
  }
  EOF
  [root@m7-demo-136001 k8s]# kubectl create -f ceph-pvc-storageClass.json
  persistentvolumeclaim "pvc-test-claim" created
  [root@m7-demo-136001 k8s]# kubectl get pv
  NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
  pvc-0335828d-90b7-11e8-b43c-0cc47a2af650   1Gi        RWO            Delete           Bound    default/pvc-test-claim   ceph                    20m
  [root@m7-demo-136001 k8s]# kubectl describe pv pvc-0335828d-90b7-11e8-b43c-0cc47a2af650
  Name:            pvc-0335828d-90b7-11e8-b43c-0cc47a2af650
  Labels:          <none>
  Annotations:     kubernetes.io/createdby=rbd-dynamic-provisioner
                   pv.kubernetes.io/bound-by-controller=yes
                   pv.kubernetes.io/provisioned-by=kubernetes.io/rbd
  StorageClass:    ceph
  Status:          Bound
  Claim:           default/pvc-test-claim
  Reclaim Policy:  Delete
  Access Modes:    RWO
  Capacity:        1Gi
  Message:
  Source:
      Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
      CephMonitors:  [172.27.128.100:6789 172.27.128.101:6789 172.27.128.102:6789]
      RBDImage:      kubernetes-dynamic-pvc-e1726fb7-90ba-11e8-9ba5-0cc47a2af650
      FSType:
      RBDPool:       rbd
      RadosUser:     admin
      Keyring:       /etc/ceph/keyring
      SecretRef:     &{ceph-secret-admin}
      ReadOnly:      false
  Events:          <none>
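
The dynamically provisioned volume is backed by an RBD image in the rbd pool. It can be inspected from the Ceph side (a sketch; run on a host that has ceph-common and the admin keyring, e.g. 172.27.128.100):

  # List images in the pool used by the StorageClass
  rbd ls rbd
  # Show size/format/features of the image backing the PV shown above
  rbd info rbd/kubernetes-dynamic-pvc-e1726fb7-90ba-11e8-9ba5-0cc47a2af650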

Create a Pod that uses the PVC

  [root@m7-demo-136001 k8s]# cat > ceph-pv-test.yaml <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: ceph-pv-test
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: ceph-vol
        mountPath: /mnt/rbd
        readOnly: false
    volumes:
    - name: ceph-vol
      persistentVolumeClaim:
        claimName: pvc-test-claim
  EOF
  [root@m7-demo-136001 k8s]# kubectl create -f ceph-pv-test.yaml
  pod "ceph-pv-test" created
  [root@m7-demo-136001 k8s]# kubectl describe pods ceph-pv-test
  Name:         ceph-pv-test
  Namespace:    default
  Node:         m7-demo-136001/172.27.136.1
  Start Time:   Thu, 26 Jul 2018 18:25:48 +0800
  Labels:       <none>
  Annotations:  <none>
  Status:       Running
  IP:           172.30.24.43
  Containers:
    busybox:
      Container ID:  docker://c6c9f851cefe21a301caf10e4c18c68851c379d83477628560fa3c4006593c5b
      Image:         busybox
      Image ID:      docker-pullable://busybox@sha256:d21b79794850b4b15d8d332b451d95351d14c951542942a816eea69c9e04b240
      Port:          <none>
      Command:
        sleep
        3600
      State:          Running
        Started:      Thu, 26 Jul 2018 18:25:54 +0800
      Ready:          True
      Restart Count:  0
      Environment:    <none>
      Mounts:
        /mnt/rbd from ceph-vol (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-4km88 (ro)
  Conditions:
    Type           Status
    Initialized    True
    Ready          True
    PodScheduled   True
  Volumes:
    ceph-vol:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  pvc-test-claim
      ReadOnly:   false
    default-token-4km88:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  default-token-4km88
      Optional:    false
  QoS Class:       BestEffort
  Node-Selectors:  <none>
  Tolerations:     <none>
  Events:
    Type    Reason                 Age                From                     Message
    ----    ------                 ----               ----                     -------
    Normal  SuccessfulMountVolume  51s                kubelet, m7-demo-136001  MountVolume.SetUp succeeded for volume "default-token-4km88"
    Normal  Scheduled              50s                default-scheduler        Successfully assigned ceph-pv-test to m7-demo-136001
    Normal  SuccessfulMountVolume  50s (x2 over 50s)  kubelet, m7-demo-136001  MountVolume.SetUp succeeded for volume "pvc-0335828d-90b7-11e8-b43c-0cc47a2af650"
    Normal  Pulling                49s                kubelet, m7-demo-136001  pulling image "busybox"
    Normal  Pulled                 46s                kubelet, m7-demo-136001  Successfully pulled image "busybox"
    Normal  Created                46s                kubelet, m7-demo-136001  Created container
    Normal  Started                46s                kubelet, m7-demo-136001  Started container
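
As a final check, it can be useful to confirm that the RBD volume mounted at /mnt/rbd is actually writable from inside the pod (a minimal sketch; the file name test.txt is arbitrary):

  kubectl exec ceph-pv-test -- sh -c 'echo hello-ceph > /mnt/rbd/test.txt && cat /mnt/rbd/test.txt'
  # The mounted filesystem should also be visible inside the container
  kubectl exec ceph-pv-test -- df -h /mnt/rbd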