Configuring local storage for virtual machines

You can configure local storage for your virtual machines by using the hostpath provisioner feature.

About the hostpath provisioner

The hostpath provisioner is a local storage provisioner designed for OKD Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first.

When you install the OKD Virtualization Operator, the hostpath provisioner Operator is automatically installed. To use it, you must:

  • Configure SELinux:

    • If you use Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node.

    • Otherwise, apply the SELinux label container_file_t to the persistent volume (PV) backing directory on each node.

  • Create a HostPathProvisioner custom resource.

  • Create a StorageClass object for the hostpath provisioner.

The hostpath provisioner Operator deploys the provisioner as a DaemonSet on each node when you create its custom resource. In the custom resource file, you specify the backing directory for the persistent volumes that the hostpath provisioner creates.

Configuring SELinux for the hostpath provisioner on Fedora CoreOS (FCOS) 8

You must configure SELinux before you create the HostPathProvisioner custom resource. To configure SELinux on Fedora CoreOS (FCOS) 8 workers, you must create a MachineConfig object on each node.

Prerequisites

  • Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.

    The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on FCOS. For example, you can use /var/<directory_name> but not /<directory_name>.

Procedure

  1. Create the MachineConfig file. For example:

    $ touch machineconfig.yaml
  2. Edit the file, ensuring that you include the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      name: 50-set-selinux-for-hostpath-provisioner
      labels:
        machineconfiguration.openshift.io/role: worker
    spec:
      config:
        ignition:
          version: 3.2.0
        systemd:
          units:
            - contents: |
                [Unit]
                Description=Set SELinux chcon for hostpath provisioner
                Before=kubelet.service

                [Service]
                ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> (1)

                [Install]
                WantedBy=multi-user.target
              enabled: true
              name: hostpath-provisioner.service

    (1) Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/).
  3. Create the MachineConfig object:

    $ oc create -f machineconfig.yaml

    MachineConfig objects are cluster-scoped, so you do not specify a namespace.

Using the hostpath provisioner to enable local storage

To deploy the hostpath provisioner and enable your virtual machines to use local storage, first create a HostPathProvisioner custom resource.

Prerequisites

  • Create a backing directory on each node for the persistent volumes (PVs) that the hostpath provisioner creates.

    The backing directory must not be located in the filesystem’s root directory because the / partition is read-only on Fedora CoreOS (FCOS). For example, you can use /var/<directory_name> but not /<directory_name>.

  • Apply the SELinux context container_file_t to the PV backing directory on each node. For example:

    $ sudo chcon -t container_file_t -R <backing_directory_path>

    If you use Fedora CoreOS (FCOS) 8 workers, you must configure SELinux by using a MachineConfig manifest instead.

Procedure

  1. Create the HostPathProvisioner custom resource file. For example:

    $ touch hostpathprovisioner_cr.yaml
  2. Edit the file, ensuring that the spec.pathConfig.path value is the directory where you want the hostpath provisioner to create PVs. For example:

    apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
    kind: HostPathProvisioner
    metadata:
      name: hostpath-provisioner
    spec:
      imagePullPolicy: IfNotPresent
      pathConfig:
        path: "<backing_directory_path>" (1)
        useNamingPrefix: false (2)
      workload: (3)

    (1) Specify the backing directory where you want the provisioner to create PVs. This directory must not be located in the filesystem’s root directory (/).
    (2) Change this value to true if you want to use the name of the persistent volume claim (PVC) that is bound to the created PV as the prefix of the directory name.
    (3) Optional: You can use the spec.workload field to configure node placement rules for the hostpath provisioner.

    If you did not create the backing directory, the provisioner attempts to create it for you. If you did not apply the container_file_t SELinux context, this can cause Permission denied errors.

  3. Create the custom resource in the openshift-cnv namespace:

    $ oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv
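As an illustration of the optional spec.workload field from the example above, the following sketch adds a node selector so the provisioner DaemonSet runs only on matching nodes. The selector shown uses the well-known kubernetes.io/os label; substitute a label that fits your cluster.

```yaml
apiVersion: hostpathprovisioner.kubevirt.io/v1beta1
kind: HostPathProvisioner
metadata:
  name: hostpath-provisioner
spec:
  imagePullPolicy: IfNotPresent
  pathConfig:
    path: "<backing_directory_path>"
    useNamingPrefix: false
  workload:
    nodeSelector:
      kubernetes.io/os: linux  # example placement rule; adjust to your cluster
```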


Creating a storage class

When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class.

When using OKD Virtualization with OKD Container Storage, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. For virtual machine disks, RBD block mode volumes are more efficient and provide better performance than CephFS or RBD filesystem-mode PVCs.

To specify RBD block mode PVCs, use the ocs-storagecluster-ceph-rbd storage class and set volumeMode: Block on the PVC.
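A sketch of such a PVC follows; the claim name and requested size are placeholders of our choosing, not values from this document:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rbd  # example name
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 30Gi  # example size
```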

You cannot update a StorageClass object’s parameters after you create it.

Procedure

  1. Create a YAML file for defining the storage class. For example:

    $ touch storageclass.yaml
  2. Edit the file. For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: hostpath-provisioner (1)
    provisioner: kubevirt.io/hostpath-provisioner
    reclaimPolicy: Delete (2)
    volumeBindingMode: WaitForFirstConsumer (3)

    (1) You can optionally rename the storage class by changing this value.
    (2) The two possible reclaimPolicy values are Delete and Retain. If you do not specify a value, the storage class defaults to Delete.
    (3) The volumeBindingMode value determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a PV until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod’s scheduling requirements.

    Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. If the PV is provisioned and its disk image prepared before the virtual machine’s pod is scheduled, the scheduler might place the virtual machine on a different node than the one where the local storage PV is pinned.

    To avoid this problem, let the Kubernetes pod scheduler drive the binding of the PVC to a PV on the correct node. With volumeBindingMode set to WaitForFirstConsumer in the StorageClass, binding and provisioning of the PV are delayed until a pod that uses the PVC is created, so the PV is created on the node where the pod is scheduled.

  3. Create the StorageClass object:

    $ oc create -f storageclass.yaml
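To illustrate how a workload consumes this storage class, a PVC might look like the following sketch; the claim name and size are assumptions for the example. Because of WaitForFirstConsumer, the PV is not provisioned until a pod that uses this claim is created.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc  # example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hostpath-provisioner
  resources:
    requests:
      storage: 10Gi  # example size
```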
