Persistent Storage Using Container Storage Interface (CSI)

Overview

Container Storage Interface (CSI) allows OKD to consume storage from storage backends that implement the CSI interface as persistent storage.

CSI volumes are currently in Technology Preview and are not intended for production workloads. CSI volumes may change in a future release of OKD.

OKD does not ship with any CSI drivers. It is recommended to use the CSI drivers provided by the community or by storage vendors.

OKD 3.11 supports version 0.2.0 of the CSI specification.

Architecture

CSI drivers are typically shipped as container images. These containers are not aware of the OKD cluster in which they run. To use a CSI-compatible storage backend in OKD, the cluster administrator must deploy several components that serve as a bridge between OKD and the storage driver.

The following diagram provides a high-level overview of the components running in pods in the OKD cluster.

Architecture of CSI components

It is possible to run multiple CSI drivers for different storage backends. Each driver needs its own external controller deployment and a DaemonSet with the driver and the CSI registrar.

External CSI Controllers

External CSI Controllers is a deployment that runs one or more pods with three containers:

  • External CSI attacher container that translates attach and detach calls from OKD into ControllerPublish and ControllerUnpublish calls to the CSI driver

  • External CSI provisioner container that translates provision and delete calls from OKD into CreateVolume and DeleteVolume calls to the CSI driver

  • CSI driver container

The CSI attacher and CSI provisioner containers talk to the CSI driver container using UNIX Domain Sockets, ensuring that no CSI communication leaves the pod. The CSI driver is not accessible from outside of the pod.
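
This socket is typically shared by mounting a single emptyDir volume into all three containers and pointing the sidecars at the driver's socket path. The following is a minimal sketch of the pattern, assuming the conventional socket-dir volume name and /csi/csi.sock socket path that the full example later in this topic also uses:

    # Sketch: all containers in the controller pod mount the same emptyDir,
    # so CSI traffic never leaves the pod.
    spec:
      containers:
      - name: csi-attacher
        args:
        - "--csi-address=$(ADDRESS)"
        env:
        - name: ADDRESS
          value: /csi/csi.sock   # the driver's socket, visible only inside this pod
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      # the csi-provisioner and CSI driver containers mount socket-dir the same way
      volumes:
      - name: socket-dir
        emptyDir: {}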

The attach, detach, provision, and delete operations typically require the CSI driver to use credentials for the storage backend. Run the CSI controller pods on infrastructure nodes so that the credentials never leak to user processes, even in the event of a catastrophic security breach on a compute node.
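
For example, assuming the infrastructure nodes in your cluster carry the node-role.kubernetes.io/infra=true label (the label name is an assumption; verify the labels used in your cluster), the controller pods can be pinned to them with a nodeSelector in the deployment's pod template:

    spec:
      template:
        spec:
          # Assumption: infra nodes are labeled node-role.kubernetes.io/infra=true
          nodeSelector:
            node-role.kubernetes.io/infra: "true"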

The external attacher must also run for CSI drivers that do not support third-party attach/detach operations. The external attacher will not issue any ControllerPublish or ControllerUnpublish operations to the CSI driver. However, it still must run to implement the necessary OKD attachment API.
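
Attachment requests reach the external attacher as VolumeAttachment objects that OKD creates when a pod using a CSI volume is scheduled to a node. The following is a sketch of such an object, assuming the storage.k8s.io/v1beta1 API version of the underlying Kubernetes release; the field values are illustrative:

    apiVersion: storage.k8s.io/v1beta1
    kind: VolumeAttachment
    metadata:
      name: csi-4338b5e1                # generated by OKD; illustrative
    spec:
      attacher: csi-cinderplugin        # the CSI driver name
      nodeName: node1.example.com
      source:
        persistentVolumeName: kubernetes-dynamic-pv-3271ffcb4e1811e8
    status:
      attached: true                    # set by the external attacher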

CSI Driver DaemonSet

Finally, the CSI driver DaemonSet runs a pod on every node, allowing OKD to mount storage provided by the CSI driver to the node and use it in user workloads (pods) as persistent volumes (PVs). The pod with the CSI driver installed contains the following containers:

  • CSI driver registrar, which registers the CSI driver into the openshift-node service running on the node. The openshift-node process running on the node then directly connects with the CSI driver using the UNIX Domain Socket available on the node.

  • CSI driver.

The CSI driver deployed on the node should have as few credentials for the storage backend as possible. OKD uses only the node plug-in set of CSI calls, such as NodePublish/NodeUnpublish and NodeStage/NodeUnstage (if implemented).

Example Deployment

Since OKD does not ship with any CSI driver installed, this example shows how to deploy a community driver for OpenStack Cinder in OKD.

  1. Create a new project where the CSI components will run and a new service account that will run the components. An explicit node selector is used so that the DaemonSet with the CSI driver also runs on master nodes.

    # oc adm new-project csi --node-selector=""
    Now using project "csi" on server "https://example.com:8443".

    # oc create serviceaccount cinder-csi
    serviceaccount "cinder-csi" created

    # oc adm policy add-scc-to-user privileged system:serviceaccount:csi:cinder-csi
    scc "privileged" added to: ["system:serviceaccount:csi:cinder-csi"]
  2. Apply this YAML file to create the deployment with the external CSI attacher and provisioner and the DaemonSet with the CSI driver.

    # This YAML file contains all API objects that are necessary to run Cinder CSI
    # driver.
    #
    # In production, this needs to be in separate files, e.g. service account and
    # role and role binding needs to be created once.
    #
    # It serves as an example of how to use external attacher and external provisioner
    # images that are shipped with OpenShift Container Platform with a community CSI driver.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cinder-csi-role
    rules:
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["create", "delete", "get", "list", "watch", "update", "patch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "get", "list", "watch", "update", "patch"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update", "patch"]
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["get", "list", "watch", "update", "patch"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["volumeattachments"]
      verbs: ["get", "list", "watch", "update", "patch"]
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: cinder-csi-role
    subjects:
    - kind: ServiceAccount
      name: cinder-csi
      namespace: csi
    roleRef:
      kind: ClusterRole
      name: cinder-csi-role
      apiGroup: rbac.authorization.k8s.io
    ---
    apiVersion: v1
    data:
      cloud.conf: W0dsb2JhbF0KYXV0aC11cmwgPSBodHRwczovL2V4YW1wbGUuY29tOjEzMDAwL3YyLjAvCnVzZXJuYW1lID0gYWxhZGRpbgpwYXNzd29yZCA9IG9wZW5zZXNhbWUKdGVuYW50LWlkID0gZTBmYTg1YjZhMDY0NDM5NTlkMmQzYjQ5NzE3NGJlZDYKcmVnaW9uID0gcmVnaW9uT25lCg== (1)
    kind: Secret
    metadata:
      creationTimestamp: null
      name: cloudconfig
    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: cinder-csi-controller
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cinder-csi-controllers
      template:
        metadata:
          labels:
            app: cinder-csi-controllers
        spec:
          serviceAccount: cinder-csi
          containers:
          - name: csi-attacher
            image: registry.redhat.io/openshift3/csi-attacher:v3.11
            args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
            - "--leader-election-namespace=$(MY_NAMESPACE)"
            - "--leader-election-identity=$(MY_NAME)"
            env:
            - name: MY_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: ADDRESS
              value: /csi/csi.sock
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
          - name: csi-provisioner
            image: registry.redhat.io/openshift3/csi-provisioner:v3.11
            args:
            - "--v=5"
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
            env:
            - name: ADDRESS
              value: /csi/csi.sock
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
          - name: cinder-driver
            image: quay.io/jsafrane/cinder-csi-plugin
            command: [ "/bin/cinder-csi-plugin" ]
            args:
            - "--nodeid=$(NODEID)"
            - "--endpoint=unix://$(ADDRESS)"
            - "--cloud-config=/etc/cloudconfig/cloud.conf"
            env:
            - name: NODEID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: ADDRESS
              value: /csi/csi.sock
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: cloudconfig
              mountPath: /etc/cloudconfig
          volumes:
          - name: socket-dir
            emptyDir: {}
          - name: cloudconfig
            secret:
              secretName: cloudconfig
    ---
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: cinder-csi-ds
    spec:
      selector:
        matchLabels:
          app: cinder-csi-driver
      template:
        metadata:
          labels:
            app: cinder-csi-driver
        spec: (2)
          serviceAccount: cinder-csi
          containers:
          - name: csi-driver-registrar
            image: registry.redhat.io/openshift3/csi-driver-registrar:v3.11
            securityContext:
              privileged: true
            args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
          - name: cinder-driver
            securityContext:
              privileged: true
              capabilities:
                add: ["SYS_ADMIN"]
              allowPrivilegeEscalation: true
            image: quay.io/jsafrane/cinder-csi-plugin
            command: [ "/bin/cinder-csi-plugin" ]
            args:
            - "--nodeid=$(NODEID)"
            - "--endpoint=unix://$(ADDRESS)"
            - "--cloud-config=/etc/cloudconfig/cloud.conf"
            env:
            - name: NODEID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: ADDRESS
              value: /csi/csi.sock
            volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: cloudconfig
              mountPath: /etc/cloudconfig
            - name: mountpoint-dir
              mountPath: /var/lib/origin/openshift.local.volumes/pods/
              mountPropagation: "Bidirectional"
            - name: cloud-metadata
              mountPath: /var/lib/cloud/data/
            - name: dev
              mountPath: /dev
          volumes:
          - name: cloud-metadata
            hostPath:
              path: /var/lib/cloud/data/
          - name: socket-dir
            hostPath:
              path: /var/lib/kubelet/plugins/csi-cinderplugin
              type: DirectoryOrCreate
          - name: mountpoint-dir
            hostPath:
              path: /var/lib/origin/openshift.local.volumes/pods/
              type: Directory
          - name: cloudconfig
            secret:
              secretName: cloudconfig
          - name: dev
            hostPath:
              path: /dev
    (1) Replace with the cloud.conf for your OpenStack deployment, as described in OpenStack configuration. For example, the Secret can be generated with oc create secret generic cloudconfig --from-file=cloud.conf --dry-run -o yaml, as shown in the sketch after this procedure.
    (2) Optionally, add a nodeSelector to the CSI driver pod template to configure the nodes on which the CSI driver starts. Only nodes matching the selector run pods that use volumes served by the CSI driver. Without a nodeSelector, the driver runs on all nodes in the cluster.
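
For example, the following is a sketch of generating the cloudconfig Secret from an existing cloud.conf file using the command from callout (1); the data value shown is a placeholder for the encoded contents of your own file:

    # oc create secret generic cloudconfig --from-file=cloud.conf --dry-run -o yaml
    apiVersion: v1
    data:
      cloud.conf: <base64-encoded contents of your cloud.conf>
    kind: Secret
    metadata:
      creationTimestamp: null
      name: cloudconfig

After the file is applied, the controller pods and one DaemonSet pod per node should become ready, which can be checked with oc get pods -n csi.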

Dynamic Provisioning

Dynamic provisioning of persistent storage depends on the capabilities of the CSI driver and underlying storage backend. The provider of the CSI driver should document how to create a StorageClass in OKD and the parameters available for configuration.

As seen in the OpenStack Cinder example, you can deploy this StorageClass to enable dynamic provisioning. The following example creates a new default storage class that ensures that all PVCs that do not request a specific storage class are provisioned by the installed CSI driver:

  # oc create -f - << EOF
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: cinder
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: csi-cinderplugin
  parameters:
  EOF
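
To confirm that the class was created and is marked as default, list the storage classes. The (default) suffix marks the default class; the output below is illustrative and may differ in your cluster:

  # oc get storageclass
  NAME               PROVISIONER        AGE
  cinder (default)   csi-cinderplugin   2m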

Usage

Once the CSI driver is deployed and the StorageClass for dynamic provisioning is created, OKD is ready to use CSI. The following example installs a default MySQL template without any changes to the template:

  # oc new-app mysql-persistent
  --> Deploying template "openshift/mysql-persistent" to project default
  ...

  # oc get pvc
  NAME      STATUS    VOLUME                                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  mysql     Bound     kubernetes-dynamic-pv-3271ffcb4e1811e8   1Gi        RWO            cinder         3s
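
Because the cinder storage class is the default, any PVC that omits storageClassName is also served by the CSI driver. The following is a minimal sketch of such a claim, with an illustrative name and size:

  # oc create -f - << EOF
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-pvc
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
  EOF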