Configuring Local Volumes

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

Overview

OKD can be configured to access local volumes for application data.

Local volumes are persistent volumes (PV) that represent locally-mounted file systems. As of OKD 3.10, this also includes raw block devices. A raw device offers a more direct route to the physical device and allows an application more control over the timing of I/O operations to that physical device. This makes raw devices suitable for complex applications such as database management systems that typically do their own caching. Local volumes have a few unique features. Any pod that uses a local volume PV is scheduled on the node where the local volume is mounted.

In addition, local volumes include a provisioner that automatically creates PVs for locally-mounted devices. This provisioner currently scans only pre-configured directories. This provisioner cannot dynamically provision volumes, but this feature might be implemented in a future release.

The local volume provisioner allows using local storage within OKD and supports:

  • Volumes

  • PVs

Local volumes are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features support scope, see https://access.redhat.com/support/offerings/techpreview/.

Mounting local volumes

All local volumes must be manually mounted before they can be consumed by OKD as PVs.

To mount local volumes:

  1. Mount all volumes into the /mnt/local-storage/<storage-class-name>/<volume> path. Administrators must create the local devices as needed using any method, such as disk partitioning or LVM, create suitable file systems on these devices, and mount them using a script or /etc/fstab entries, for example:

    # device name   # mount point                # FS   # options # extra
    /dev/sdb1       /mnt/local-storage/ssd/disk1 ext4   defaults  1 2
    /dev/sdb2       /mnt/local-storage/ssd/disk2 ext4   defaults  1 2
    /dev/sdb3       /mnt/local-storage/ssd/disk3 ext4   defaults  1 2
    /dev/sdc1       /mnt/local-storage/hdd/disk1 ext4   defaults  1 2
    /dev/sdc2       /mnt/local-storage/hdd/disk2 ext4   defaults  1 2
  2. Make all volumes accessible to the processes running within the Docker containers. You can change the SELinux labels of the mounted file systems to allow this, for example:

    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
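
Before configuring the provisioner, you can verify that the volumes are mounted and labeled as expected, for example (a quick check; the exact output depends on your devices):

    $ findmnt -R /mnt/local-storage
    $ ls -ZR /mnt/local-storage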

Configuring the local provisioner

OKD depends on an external provisioner to create PVs for local devices and to clean up PVs when they are no longer needed, so that the underlying volumes can be reused.

  • The local volume provisioner is different from most provisioners and does not support dynamic provisioning.

  • The local volume provisioner requires administrators to preconfigure the local volumes on each node and mount them under discovery directories. The provisioner then manages the volumes by creating and cleaning up PVs for each volume.

To configure the local provisioner:

  1. Configure the external provisioner using a ConfigMap to relate directories with storage classes. This configuration must be created before the provisioner is deployed, for example:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-volume-config
    data:
      storageClassMap: |
        local-ssd: (1)
          hostDir: /mnt/local-storage/ssd (2)
          mountDir: /mnt/local-storage/ssd (3)
        local-hdd:
          hostDir: /mnt/local-storage/hdd
          mountDir: /mnt/local-storage/hdd

    (1) Name of the storage class.
    (2) Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
    (3) Path to the directory in the provisioner pod. We recommend using the same directory structure as on the host; mountDir can be omitted in this case.
  2. (Optional) Create a standalone namespace for the local volume provisioner and its configuration, for example: oc new-project local-storage.
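
    For example, assuming you saved the ConfigMap above in a file named local-volume-config.yaml (the file name is only illustrative), you can create the project and the ConfigMap in it:

    $ oc new-project local-storage
    $ oc create -f local-volume-config.yaml -n local-storage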

With this configuration, the provisioner creates:

  • One PV with storage class local-ssd for every subdirectory mounted in the /mnt/local-storage/ssd directory

  • One PV with storage class local-hdd for every subdirectory mounted in the /mnt/local-storage/hdd directory
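
For illustration, a PV created by the provisioner for a file system mounted at /mnt/local-storage/ssd/disk1 might look similar to the following sketch. The PV name, capacity, and node name are placeholders, and the exact representation of the node affinity depends on the OKD release:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: local-pv-disk1        # name is generated by the provisioner
    spec:
      capacity:
        storage: 100Gi            # detected size of the file system
      accessModes:
      - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-ssd
      local:
        path: /mnt/local-storage/ssd/disk1
      nodeAffinity:               # pins pods that use this PV to the node that owns the disk
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - node1.example.com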

The syntax of the ConfigMap has changed between OKD 3.9 and 3.10. Since this feature is in Technology Preview, the ConfigMap is not automatically converted during the update.

Deploying the local provisioner

Before starting the provisioner, mount all local devices and create a ConfigMap with storage classes and their directories.

To deploy the local provisioner:

  1. Install the local provisioner from the local-storage-provisioner-template.yaml file.

  2. Create a service account that allows running pods as a root user, using hostPath volumes, and using any SELinux context to monitor, manage, and clean local volumes:

    $ oc create serviceaccount local-storage-admin
    $ oc adm policy add-scc-to-user privileged -z local-storage-admin

    To allow the provisioner pod to delete content on local volumes created by any pod, root privileges and any SELinux context are required. hostPath is required to access the /mnt/local-storage path on the host.

  3. Install the template:

    $ oc create -f https://raw.githubusercontent.com/openshift/origin/release-3.10/examples/storage-examples/local-examples/local-storage-provisioner-template.yaml
  4. Instantiate the template by specifying values for the CONFIGMAP, SERVICE_ACCOUNT, NAMESPACE, and PROVISIONER_IMAGE parameters:

    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage \
      -p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
      local-storage-provisioner
  5. Add the necessary storage classes:

    $ oc create -f ./storage-class-ssd.yaml
    $ oc create -f ./storage-class-hdd.yaml

    For example:

    storage-class-ssd.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-ssd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

    storage-class-hdd.yaml

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-hdd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

See the local storage provisioner template for other configurable options. This template creates a DaemonSet that runs a pod on every node. The pod watches the directories that are specified in the ConfigMap and automatically creates PVs for them.

The provisioner runs with root permissions because it removes all data from the modified directories when a PV is released.
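
To verify the deployment, check that the provisioner pods are running on every node and that PVs appear for the mounted volumes, for example (assuming the local-storage project was used):

    $ oc get pods -n local-storage
    $ oc get pv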

Adding new devices

Adding a new device is semi-automatic. The provisioner periodically checks for new mounts in configured directories. Administrators must create a new subdirectory, mount a device, and allow pods to use the device by applying the SELinux label, for example:

    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/

Omitting any of these steps may result in the wrong PV being created.
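
Putting these steps together, a complete sequence for adding one new SSD-backed volume might look like the following sketch, where the device /dev/sdb4 and the disk4 directory are only examples:

    $ mkdir -p /mnt/local-storage/ssd/disk4
    $ mount /dev/sdb4 /mnt/local-storage/ssd/disk4
    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/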

Configuring raw block devices

It is possible to statically provision raw block devices using the local volume provisioner. This feature is disabled by default and requires additional configuration.

To configure raw block devices:

  1. Enable the BlockVolume feature gate on all masters. Edit or create the master configuration file on all masters (/etc/origin/master/master-config.yaml by default) and add BlockVolume=true under the apiServerArguments and controllerArguments sections:

    apiServerArguments:
      feature-gates:
      - BlockVolume=true
    ...
    controllerArguments:
      feature-gates:
      - BlockVolume=true
    ...
  2. Enable the feature gate on all nodes by editing the node configuration ConfigMap:

    $ oc edit configmap node-config-compute --namespace openshift-node
    $ oc edit configmap node-config-master --namespace openshift-node
    $ oc edit configmap node-config-infra --namespace openshift-node
  3. Ensure that all ConfigMaps contain BlockVolume=true in the feature gates array of the kubeletArguments, for example:

    node configmap feature-gates setting

    kubeletArguments:
      feature-gates:
      - RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true,BlockVolume=true
  4. Restart the master. The nodes restart automatically after the configuration change. This may take several minutes.
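
    How you restart the master depends on your installation; for example, on clusters where the control plane runs as static pods, commands similar to the following are typically used (an assumption; use the restart method appropriate for your cluster):

    # master-restart api
    # master-restart controllers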

Preparing raw block devices

Before you start the provisioner, link all raw block devices that pods can use into the /mnt/local-storage/<storage-class> directory structure. For example, to make the device /dev/dm-36 available:

  1. Create a directory for the device’s storage class in /mnt/local-storage:

    $ mkdir -p /mnt/local-storage/block-devices
  2. Create a symbolic link in that directory that points to the device:

    $ ln -s /dev/dm-36 dm-uuid-LVM-1234

    To avoid possible name conflicts, give the symbolic link the same name as the corresponding link in the /dev/disk/by-uuid or /dev/disk/by-id directory.

  3. Create or update the ConfigMap that configures the provisioner:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-volume-config
    data:
      storageClassMap: |
        block-devices: (1)
          hostDir: /mnt/local-storage/block-devices (2)
          mountDir: /mnt/local-storage/block-devices (3)

    (1) Name of the storage class.
    (2) Path to the directory on the host. It must be a subdirectory of /mnt/local-storage.
    (3) Path to the directory in the provisioner pod. If you use the directory structure that the host uses, which is recommended, omit the mountDir parameter.
  4. Change the SELinux label of the device and of the /mnt/local-storage/ directory:

    $ chcon -R unconfined_u:object_r:svirt_sandbox_file_t:s0 /mnt/local-storage/
    $ chcon unconfined_u:object_r:svirt_sandbox_file_t:s0 /dev/dm-36
  5. Create a storage class for the raw block devices:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: block-devices
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer

The block device /dev/dm-36 is now ready to be used by the provisioner and provisioned as a PV.

Deploying raw block device provisioners

Deploying the provisioner for raw block devices is similar to deploying the provisioner for local volumes. There are two differences:

  1. The provisioner must run in a privileged container.

  2. The provisioner must have access to the /dev file system from the host.

To deploy the provisioner for raw block devices:

  1. Download the template from the local-storage-provisioner-template.yaml file.

  2. Edit the template:

    1. Set the privileged attribute of the securityContext of the container spec to true:

      ...
      containers:
      ...
        name: provisioner
        ...
        securityContext:
          privileged: true
      ...
    2. Mount the host /dev/ file system to the container using hostPath:

      ...
      containers:
      ...
        name: provisioner
        ...
        volumeMounts:
        - mountPath: /dev
          name: dev
        ...
      volumes:
      - hostPath:
          path: /dev
        name: dev
      ...
  3. Create the template from the modified YAML file:

    $ oc create -f local-storage-provisioner-template.yaml
  4. Start the provisioner:

    $ oc new-app -p CONFIGMAP=local-volume-config \
      -p SERVICE_ACCOUNT=local-storage-admin \
      -p NAMESPACE=local-storage \
      -p PROVISIONER_IMAGE=quay.io/external_storage/local-volume-provisioner:v1.0.1 \
      local-storage-provisioner
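
Once the provisioner pods are running, they discover the linked devices under the configured directory and create PVs for them with volumeMode set to Block. You can verify this, for example, with:

    $ oc get pv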

Using raw block device persistent volumes

To use the raw block device in a pod, create a persistent volume claim (PVC) with volumeMode set to Block and storageClassName set to block-devices, for example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: block-pvc
    spec:
      storageClassName: block-devices
      accessModes:
      - ReadWriteOnce
      volumeMode: Block
      resources:
        requests:
          storage: 1Gi

Pod using the raw block device PVC

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-test
      labels:
        name: busybox-test
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: gcr.io/google_containers/busybox
        command:
        - "/bin/sh"
        - "-c"
        - "while true; do date; sleep 1; done"
        resources:
          limits:
            cpu: 0.5
        volumeDevices:
        - name: vol
          devicePath: /dev/xvda
      volumes:
      - name: vol
        persistentVolumeClaim:
          claimName: block-pvc

The volume is not mounted in the pod but is exposed as the /dev/xvda raw block device.
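
To confirm that the device is available inside the pod, you can, for example, list it from the running container (a quick check; the output format varies):

    $ oc exec busybox-test -- ls -l /dev/xvda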