Persistent Storage Using OpenStack Cinder

Overview

You can provision your OKD cluster with persistent storage using OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed.

Before you create persistent volumes (PVs) using Cinder, configure OKD for OpenStack.

The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision OpenStack Cinder volumes dynamically.

Persistent volumes are not bound to a single project or namespace; they can be shared across the OKD cluster. Persistent volume claims, however, are specific to a project or namespace and can be requested by users.

High availability of storage in the infrastructure is left to the underlying storage provider.
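As a hedged illustration of the dynamic provisioning mentioned above (the class name `gold` and the availability zone `nova` are assumptions for this sketch, not values from this document), a StorageClass using the in-tree Cinder provisioner might look like:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast            # Cinder volume type; assumed to exist in your cloud
  availability: nova    # availability zone for the created volumes
  fsType: ext4          # file system to format new volumes with
```

Claims that reference this class by name receive dynamically created Cinder volumes instead of binding to pre-provisioned PVs.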

Provisioning Cinder PVs

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OKD. After ensuring that OKD is configured for OpenStack, all that is required for Cinder is a Cinder volume ID and the **PersistentVolume** API.

Creating the Persistent Volume

You must define your PV in an object definition before creating it in OKD:

  1. Save your object definition to a file, for example cinder-pv.yaml:

        apiVersion: "v1"
        kind: "PersistentVolume"
        metadata:
          name: "pv0001" (1)
        spec:
          capacity:
            storage: "5Gi" (2)
          accessModes:
            - "ReadWriteOnce"
          cinder: (3)
            fsType: "ext3" (4)
            volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" (5)

    (1) The name of the volume that is used by persistent volume claims or pods.
    (2) The amount of storage allocated to this volume.
    (3) The volume type, in this case cinder.
    (4) The file system type to mount.
    (5) The Cinder volume to use.

    Do not change the **fsType** parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure.

  2. Create the persistent volume:

        # oc create -f cinder-pv.yaml
        persistentvolume "pv0001" created
  3. Verify that the persistent volume exists:

        # oc get pv
        NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
        pv0001    <none>    5Gi        RWO           Available                       2s

Users can then request storage using persistent volume claims, which can now bind to your new persistent volume.

Persistent volume claims exist only in the user’s namespace and can be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim from a different namespace causes the pod to fail.
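Such a claim might be sketched as follows; the claim name `claim1` is an illustrative assumption, and the 5Gi request matches the capacity of the PV created above:

```
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "claim1"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "5Gi"
```

After `oc create -f` on this definition in a project, the claim binds to an available PV whose capacity and access modes satisfy the request, and pods in that project can mount it by claim name.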

Cinder PV format

Before OKD mounts the volume and passes it to a container, it checks that it contains a file system as specified by the **fsType** parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system.

This allows using unformatted Cinder volumes as persistent volumes, because OKD formats them before the first use.

Cinder volume security

If you use Cinder PVs in your application, configure security for their deployment configurations.

Review the Volume Security information before implementing Cinder volumes.

  1. Create an SCC that uses the appropriate **fsGroup** strategy.

  2. Create a service account and add it to the SCC:

        $ oc create serviceaccount <service_account>
        $ oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>
  3. In your application’s deployment configuration, provide the service account name and securityContext:

        apiVersion: v1
        kind: ReplicationController
        metadata:
          name: frontend-1
        spec:
          replicas: 1  (1)
          selector:    (2)
            name: frontend
          template:    (3)
            metadata:
              labels:  (4)
                name: frontend (5)
            spec:
              containers:
              - image: openshift/hello-openshift
                name: helloworld
                ports:
                - containerPort: 8080
                  protocol: TCP
              restartPolicy: Always
              serviceAccountName: <service_account> (6)
              securityContext:
                fsGroup: 7777 (7)

    (1) The number of copies of the pod to run.
    (2) The label selector of the pod to run.
    (3) A template for the pod the controller creates.
    (4) The labels on the pod must include labels from the label selector.
    (5) The maximum name length after expanding any parameters is 63 characters.
    (6) Specify the service account you created.
    (7) Specify an fsGroup for the pods.
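The SCC from step 1 might be sketched as follows. This is a minimal example under the assumption that pods should run with fsGroup 7777, matching the deployment configuration above; the name cinder-scc and the single-value range are illustrative, not part of this document:

```
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: cinder-scc
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: MustRunAs       # require pods to set an fsGroup from the range below
  ranges:
  - min: 7777
    max: 7777
supplementalGroups:
  type: RunAsAny
```

With `MustRunAs` and this range, pods admitted under the SCC must request fsGroup 7777, so files on the mounted Cinder volume are group-owned by that GID and readable by the containers.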