Sharing an NFS mount across two persistent volume claims

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

Overview

The following use case describes how a cluster administrator can configure shared storage for use by two separate containers. This example highlights the use of NFS, but it can easily be adapted to other shared storage types, such as GlusterFS. In addition, this example shows configuration of pod security as it relates to shared storage.

Persistent Storage Using NFS provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using NFS as persistent storage. This topic shows an end-to-end example of using an existing NFS cluster and OKD persistent store, and assumes that an NFS server and exports already exist in your OKD infrastructure.

All oc commands are executed on the OKD master host.
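
If you want to confirm that the prerequisite export is reachable before proceeding, a quick check can be run from the master. This is a minimal sketch, assuming the showmount client (part of nfs-utils) is installed and using the example server name nfs.f22 and export path /opt/nfs from the definitions below; the export path should appear in the command's output:

# showmount -e nfs.f22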

Creating the Persistent Volume

Before creating the PV object in OKD, the persistent volume (PV) definition is created:

Example 1. Persistent Volume Object Definition Using NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv (1)
spec:
  capacity:
    storage: 1Gi (2)
  accessModes:
    - ReadWriteMany (3)
  persistentVolumeReclaimPolicy: Retain (4)
  nfs: (5)
    path: /opt/nfs (6)
    server: nfs.f22 (7)
    readOnly: false
(1) The name of the PV, which is referenced in pod definitions or displayed in various oc volume commands.
(2) The amount of storage allocated to this volume.
(3) accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
(4) The volume reclaim policy Retain indicates that the volume will be preserved after the pods accessing it terminate.
(5) This defines the volume type being used, in this case the NFS plug-in.
(6) This is the NFS mount path.
(7) This is the NFS server. This can also be specified by IP address.

Save the PV definition to a file, for example nfs-pv.yaml, and create the persistent volume:

# oc create -f nfs-pv.yaml
persistentvolume "nfs-pv" created

Verify that the persistent volume was created:

# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
nfs-pv    <none>    1Gi        RWX           Available                       37s

Creating the Persistent Volume Claim

A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound to by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC. This is the use case we are highlighting in this example.

Example 2. PVC Object Definition

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc (1)
spec:
  accessModes:
    - ReadWriteMany (2)
  resources:
    requests:
      storage: 1Gi (3)
(1) The claim name is referenced by the pod under its volumes section.
(2) As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
(3) This claim will look for PVs offering 1Gi or greater capacity.

Save the PVC definition to a file, for example nfs-pvc.yaml, and create the PVC:

# oc create -f nfs-pvc.yaml
persistentvolumeclaim "nfs-pvc" created

Verify that the PVC was created and bound to the expected PV:

# oc get pvc
NAME      LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
nfs-pvc   <none>    Bound     nfs-pv    1Gi        RWX           24s   (1)

(1) The claim, nfs-pvc, was bound to the nfs-pv PV.
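
The binding can also be confirmed from the PV side. The output below is only illustrative of what to expect once the claim is bound; the exact values, such as the age, will differ in your environment:

# oc get pv
NAME      LABELS    CAPACITY   ACCESSMODES   STATUS    CLAIM             REASON    AGE
nfs-pv    <none>    1Gi        RWX           Bound     default/nfs-pvc             2m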

Ensuring NFS Volume Access

Access to the node hosting the NFS server is necessary. On this node, examine the NFS export mount:

[root@nfs nfs]# ls -lZ /opt/nfs/
total 8
-rw-r--r--. 1 root (1)  100003 (2)   system_u:object_r:usr_t:s0     10 Oct 12 23:27 test2b

(1) The owner has ID 0.
(2) The group has ID 100003.

In order to access the NFS mount, the container must match the SELinux label, and either run with a UID of 0, or with 100003 in its supplemental groups range. Gain access to the volume by matching the NFS mount’s groups, which will be defined in the pod definition below.
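
For reference, /etc/exports on the NFS server might contain a line roughly like the following. This is a sketch only; the export options shown are common defaults and are assumptions, not values taken from this environment:

/opt/nfs *(rw,sync,root_squash)

After editing /etc/exports, re-export the file systems on the NFS server:

# exportfs -ra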

By default, SELinux does not allow writing from a pod to a remote NFS server. To enable writing to NFS volumes with SELinux enforcing on each node, run:

# setsebool -P virt_use_nfs on
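
If the cluster has multiple nodes, the boolean must be set on every node that can run pods using the NFS volume. A minimal sketch using SSH from the master, where node1.example.com and node2.example.com are placeholder host names for your nodes:

# for node in node1.example.com node2.example.com; do ssh root@$node setsebool -P virt_use_nfs on; done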

Creating the Pod

A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the NFS volume for read-write access:

Example 3. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: hello-openshift-nfs-pod (1)
  labels:
    name: hello-openshift-nfs-pod
spec:
  containers:
    - name: hello-openshift-nfs-pod
      image: openshift/hello-openshift (2)
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: nfsvol (3)
          mountPath: /usr/share/nginx/html (4)
  securityContext:
    supplementalGroups: [100003] (5)
    privileged: false
  volumes:
    - name: nfsvol
      persistentVolumeClaim:
        claimName: nfs-pvc (6)
(1) The name of this pod as displayed by oc get pod.
(2) The image run by this pod.
(3) The name of the volume. This name must be the same in both the containers and volumes sections.
(4) The mount path as seen in the container.
(5) The group ID to be assigned to the container.
(6) The PVC that was created in the previous step.

Save the pod definition to a file, for example nfs.yaml, and create the pod:

# oc create -f nfs.yaml
pod "hello-openshift-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
hello-openshift-nfs-pod    1/1       Running   0          4s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod hello-openshift-nfs-pod
Name:                           hello-openshift-nfs-pod
Namespace:                      default (1)
Image(s):                       fedora/S3
Node:                           ose70.rh7/192.168.234.148 (2)
Start Time:                     Mon, 21 Mar 2016 09:59:47 -0400
Labels:                         name=hello-openshift-nfs-pod
Status:                         Running
Reason:
Message:
IP:                             10.1.0.4
Replication Controllers:        <none>
Containers:
  hello-openshift-nfs-pod:
    Container ID:       docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    Image:              fedora/S3
    Image ID:           docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    QoS Tier:
      cpu:              BestEffort
      memory:           BestEffort
    State:              Running
      Started:          Mon, 21 Mar 2016 09:59:49 -0400
    Ready:              True
    Restart Count:      0
    Environment Variables:
Conditions:
  Type          Status
  Ready         True
Volumes:
  nfsvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc (3)
    ReadOnly:   false
  default-token-a06zb:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-a06zb
Events: (4)
  FirstSeen  LastSeen  Count  From                 SubobjectPath                              Reason     Message
  ─────────  ────────  ─────  ────                 ─────────────                              ──────     ───────
  4m         4m        1      {scheduler }                                                    Scheduled  Successfully assigned hello-openshift-nfs-pod to ose70.rh7
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD          Pulled     Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD          Created    Created with docker id 866a37108041
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD          Started    Started with docker id 866a37108041
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{hello-openshift-nfs-pod}   Pulled     Container image "fedora/S3" already present on machine
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{hello-openshift-nfs-pod}   Created    Created with docker id a3292104d6c2
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{hello-openshift-nfs-pod}   Started    Started with docker id a3292104d6c2
(1) The project (namespace) name.
(2) The IP address of the OKD node running the pod.
(3) The PVC name used by the pod.
(4) The list of events resulting in the pod being launched and the NFS volume being mounted. The container will not start correctly if the volume cannot be mounted.

More internal information, including the SCC used to authorize the pod, the pod's user and group IDs, and the SELinux label, is shown by the oc get pod <name> -o yaml command:

[root@ose70 nfs]# oc get pod hello-openshift-nfs-pod -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted (1)
  creationTimestamp: 2016-03-21T13:59:47Z
  labels:
    name: hello-openshift-nfs-pod
  name: hello-openshift-nfs-pod
  namespace: default (2)
  resourceVersion: "2814411"
  selflink: /api/v1/namespaces/default/pods/hello-openshift-nfs-pod
  uid: 2c22d2ea-ef6d-11e5-adc7-000c2900f1e3
spec:
  containers:
  - image: fedora/S3
    imagePullPolicy: IfNotPresent
    name: hello-openshift-nfs-pod
    ports:
    - containerPort: 80
      name: web
      protocol: TCP
    resources: {}
    securityContext:
      privileged: false
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /usr/share/S3/html
      name: nfsvol
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-a06zb
      readOnly: true
  dnsPolicy: ClusterFirst
  host: ose70.rh7
  imagePullSecrets:
  - name: default-dockercfg-xvdew
  nodeName: ose70.rh7
  restartPolicy: Always
  securityContext:
    supplementalGroups:
    - 100003 (3)
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: nfsvol
    persistentVolumeClaim:
      claimName: nfs-pvc (4)
  - name: default-token-a06zb
    secret:
      secretName: default-token-a06zb
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2016-03-21T13:59:49Z
    status: "True"
    type: Ready
  containerStatuses:
  - containerID: docker://a3292104d6c28d9cf49f440b2967a0fc5583540fc3b062db598557b93893bc6f
    image: fedora/S3
    imageID: docker://403d268c640894cbd76d84a1de3995d2549a93af51c8e16e89842e4c3ed6a00a
    lastState: {}
    name: hello-openshift-nfs-pod
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2016-03-21T13:59:49Z
  hostIP: 192.168.234.148
  phase: Running
  podIP: 10.1.0.4
  startTime: 2016-03-21T13:59:47Z
(1) The SCC used by the pod.
(2) The project (namespace) name.
(3) The supplemental group ID for the pod (all containers).
(4) The PVC name used by the pod.
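
If a pod is rejected at admission because its supplemental group falls outside the range allowed by the SCC, the SCC's supplementalGroups policy can be inspected. A minimal check as a cluster administrator, using the restricted SCC shown in the annotation above:

# oc get scc restricted -o yaml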

Creating an Additional Pod to Reference the Same PVC

This pod definition, created in the same namespace, uses a different container. However, we can use the same backing storage by specifying the claim name in the volumes section below:

Example 4. Pod Object Definition

apiVersion: v1
kind: Pod
metadata:
  name: busybox-nfs-pod (1)
  labels:
    name: busybox-nfs-pod
spec:
  containers:
    - name: busybox-nfs-pod
      image: busybox (2)
      command: ["sleep", "60000"]
      volumeMounts:
        - name: nfsvol-2 (3)
          mountPath: /usr/share/busybox (4)
          readOnly: false
  securityContext:
    supplementalGroups: [100003] (5)
    privileged: false
  volumes:
    - name: nfsvol-2
      persistentVolumeClaim:
        claimName: nfs-pvc (6)
(1) The name of this pod as displayed by oc get pod.
(2) The image run by this pod.
(3) The name of the volume. This name must be the same in both the containers and volumes sections.
(4) The mount path as seen in the container.
(5) The group ID to be assigned to the container.
(6) The PVC that was created earlier and is also being used by a different container.

Save the pod definition to a file, for example nfs-2.yaml, and create the pod:

# oc create -f nfs-2.yaml
pod "busybox-nfs-pod" created

Verify that the pod was created:

# oc get pods
NAME              READY     STATUS    RESTARTS   AGE
busybox-nfs-pod   1/1       Running   0          3s

More details are shown in the oc describe pod command:

[root@ose70 nfs]# oc describe pod busybox-nfs-pod
Name:                           busybox-nfs-pod
Namespace:                      default
Image(s):                       busybox
Node:                           ose70.rh7/192.168.234.148
Start Time:                     Mon, 21 Mar 2016 10:19:46 -0400
Labels:                         name=busybox-nfs-pod
Status:                         Running
Reason:
Message:
IP:                             10.1.0.5
Replication Controllers:        <none>
Containers:
  busybox-nfs-pod:
    Container ID:       docker://346d432e5a4824ebf5a47fceb4247e0568ecc64eadcc160e9bab481aecfb0594
    Image:              busybox
    Image ID:           docker://17583c7dd0dae6244203b8029733bdb7d17fccbb2b5d93e2b24cf48b8bfd06e2
    QoS Tier:
      cpu:              BestEffort
      memory:           BestEffort
    State:              Running
      Started:          Mon, 21 Mar 2016 10:19:48 -0400
    Ready:              True
    Restart Count:      0
    Environment Variables:
Conditions:
  Type          Status
  Ready         True
Volumes:
  nfsvol-2:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc
    ReadOnly:   false
  default-token-32d2z:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-32d2z
Events:
  FirstSeen  LastSeen  Count  From                 SubobjectPath                        Reason     Message
  ─────────  ────────  ─────  ────                 ─────────────                        ──────     ───────
  4m         4m        1      {scheduler }                                              Scheduled  Successfully assigned busybox-nfs-pod to ose70.rh7
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD    Pulled     Container image "openshift3/ose-pod:v3.1.0.4" already present on machine
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD    Created    Created with docker id 249b7d7519b1
  4m         4m        1      {kubelet ose70.rh7}  implicitly required container POD    Started    Started with docker id 249b7d7519b1
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{busybox-nfs-pod}     Pulled     Container image "busybox" already present on machine
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{busybox-nfs-pod}     Created    Created with docker id 346d432e5a48
  4m         4m        1      {kubelet ose70.rh7}  spec.containers{busybox-nfs-pod}     Started    Started with docker id 346d432e5a48

As you can see, both containers are using the same storage claim that is attached to the same NFS mount on the back end.
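
To confirm that the two pods really do share the same data, you can write a file from one pod and look for it on the NFS export. The following is a minimal sketch, assuming the export directory is writable by group 100003; the file name shared-test.txt is only an illustration:

# oc exec busybox-nfs-pod -- sh -c 'echo hello > /usr/share/busybox/shared-test.txt'

Then, on the NFS server node, the new file should be visible in the export:

[root@nfs nfs]# ls -l /opt/nfs/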