Persistent storage using local volumes

OKD can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface.

Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications.

Local volumes can be used only as statically created persistent volumes.

Installing the Local Storage Operator

The Local Storage Operator is not installed in OKD by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster.

Prerequisites

  • Access to the OKD web console or command-line interface (CLI).

Procedure

  1. Create the openshift-local-storage project:

    $ oc adm new-project openshift-local-storage
  2. Optional: Allow local storage creation on infrastructure nodes.

    You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring.

    You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes.

    To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command:

    $ oc annotate namespace openshift-local-storage openshift.io/node-selector=''
  3. Optional: Allow local storage to run on the management pool of CPUs in single-node deployment.

    Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning.

    To allow the Local Storage Operator to run on the management CPU pool, run the following command:

    $ oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'
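
    As a quick check (a minimal sketch, not part of the original procedure), you can confirm that the annotations were applied by printing the namespace annotations:

    $ oc get namespace openshift-local-storage -o jsonpath='{.metadata.annotations}{"\n"}'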

From the UI

To install the Local Storage Operator from the web console, follow these steps:

  1. Log in to the OKD web console.

  2. Navigate to Operators → OperatorHub.

  3. Type Local Storage into the filter box to locate the Local Storage Operator.

  4. Click Install.

  5. On the Install Operator page, select A specific namespace on the cluster. Select openshift-local-storage from the drop-down menu.

  6. Adjust the values for Update Channel and Approval Strategy to the values that you want.

  7. Click Install.

Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console.

From the CLI

  1. Install the Local Storage Operator from the CLI.

    1. Run the following command to get the OKD major and minor version. It is required for the channel value in the next step.

      $ OC_VERSION=$(oc version -o yaml | grep openshiftVersion | \
          grep -o '[0-9]*[.][0-9]*' | head -1)
    2. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml:

      Example openshift-local-storage.yaml

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: local-operator-group
        namespace: openshift-local-storage
      spec:
        targetNamespaces:
          - openshift-local-storage
      ---
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: local-storage-operator
        namespace: openshift-local-storage
      spec:
        channel: "${OC_VERSION}"
        installPlanApproval: Automatic (1)
        name: local-storage-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace

      (1) The user approval policy for an install plan.
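
      Note that oc apply, used in the next step, does not expand shell variables such as ${OC_VERSION} inside the manifest. One way to resolve the placeholder first (a sketch, not part of the original procedure) is an in-place substitution with sed:

      $ sed -i "s/\${OC_VERSION}/${OC_VERSION}/" openshift-local-storage.yaml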
  2. Create the Local Storage Operator object by entering the following command:

    $ oc apply -f openshift-local-storage.yaml

    At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.

  3. Verify local storage installation by checking that all pods and the Local Storage Operator have been created:

    1. Check that all the required pods have been created:

      $ oc -n openshift-local-storage get pods

      Example output

      NAME                                      READY   STATUS    RESTARTS   AGE
      local-storage-operator-746bf599c9-vlt5t   1/1     Running   0          19m
    2. Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project:

      $ oc get csvs -n openshift-local-storage

      Example output

      NAME                                         DISPLAY         VERSION               REPLACES   PHASE
      local-storage-operator.4.2.26-202003230335   Local Storage   4.2.26-202003230335              Succeeded

After all checks have passed, the Local Storage Operator is installed successfully.

Provisioning local volumes by using the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Prerequisites

  • The Local Storage Operator is installed.

  • You have a local disk that meets the following conditions:

    • It is attached to a node.

    • It is not mounted.

    • It does not contain partitions.

Procedure

  1. Create the local volume resource. This resource must define the nodes and paths to the local volumes.

    Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs).

    Example: Filesystem

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" (1)
    spec:
      nodeSelector: (2)
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - ip-10-0-140-183
                  - ip-10-0-158-139
                  - ip-10-0-164-33
      storageClassDevices:
        - storageClassName: "local-sc" (3)
          volumeMode: Filesystem (4)
          fsType: xfs (5)
          devicePaths: (6)
            - /path/to/device (7)

    (1) The namespace where the Local Storage Operator is installed.
    (2) Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, the Local Storage Operator attempts to find matching disks on all available nodes.
    (3) The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
    (4) The volume mode, either Filesystem or Block, that defines the type of local volumes.

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    (5) The file system that is created when the local volume is mounted for the first time.
    (6) The path containing a list of local storage devices to choose from.
    (7) Replace this value with the filepath to your actual local disk, preferably its stable by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.

    If you are running OKD on IBM zSystems with Fedora KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after a reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.

    Example: Block

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage" (1)
    spec:
      nodeSelector: (2)
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - ip-10-0-136-143
                  - ip-10-0-140-255
                  - ip-10-0-144-180
      storageClassDevices:
        - storageClassName: "localblock-sc" (3)
          volumeMode: Block (4)
          devicePaths: (5)
            - /path/to/device (6)

    (1) The namespace where the Local Storage Operator is installed.
    (2) Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node. If a value is not defined, the Local Storage Operator attempts to find matching disks on all available nodes.
    (3) The name of the storage class to use when creating persistent volume objects.
    (4) The volume mode, either Filesystem or Block, that defines the type of local volumes.
    (5) The path containing a list of local storage devices to choose from.
    (6) Replace this value with the filepath to your actual local disk, preferably its stable by-id path, such as /dev/disk/by-id/wwn. PVs are created for these local disks when the provisioner is deployed successfully.

    If you are running OKD on IBM zSystems with Fedora KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after a reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition.
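
    If you need to find the stable by-id path for a disk, one way (a sketch, not part of the original procedure; substitute your own node name) is to list the by-id symlinks from a debug shell on the node:

    $ oc debug node/<node-name>
    sh-4.4# chroot /host
    sh-4.4# ls -l /dev/disk/by-id/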

  2. Create the local volume resource in your OKD cluster. Specify the file you just created:

    $ oc create -f <local-volume>.yaml
  3. Verify that the provisioner was created and that the corresponding daemon sets were created:

    $ oc get all -n openshift-local-storage

    Example output

    NAME                                          READY   STATUS    RESTARTS   AGE
    pod/diskmaker-manager-9wzms                   1/1     Running   0          5m43s
    pod/diskmaker-manager-jgvjp                   1/1     Running   0          5m43s
    pod/diskmaker-manager-tbdsj                   1/1     Running   0          5m43s
    pod/local-storage-operator-7db4bd9f79-t6k87   1/1     Running   0          14m

    NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
    service/local-storage-operator-metrics   ClusterIP   172.30.135.36   <none>        8383/TCP,8686/TCP   14m

    NAME                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/diskmaker-manager   3         3         3       3            3           <none>          5m43s

    NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/local-storage-operator   1/1     1            1           14m

    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/local-storage-operator-7db4bd9f79   1         1         1       14m

    Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid.

  4. Verify that the persistent volumes were created:

    $ oc get pv

    Example output

    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-1cec77cf   100Gi      RWO            Delete           Available           local-sc                88m
    local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           local-sc                82m
    local-pv-3fa1c73    100Gi      RWO            Delete           Available           local-sc                48m

Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation.
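
To see which node and device back a particular PV (a minimal check, using a PV name from the example output above), you can describe it and review its node affinity and local path:

    $ oc describe pv local-pv-1cec77cf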

Provisioning local volumes without the Local Storage Operator

Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource.

Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs.

Prerequisites

  • Local disks are attached to the OKD nodes.

Procedure

  1. Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml, with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes.

    Do not use different storage class names for the same device. Doing so will create multiple PVs.

    example-pv-filesystem.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-filesystem
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Filesystem (1)
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage (2)
      local:
        path: /dev/xvdf (3)
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - example-node

    (1) The volume mode, either Filesystem or Block, that defines the type of PVs.
    (2) The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs.
    (3) The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode.

    A raw block volume (volumeMode: Block) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices.

    example-pv-block.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: example-pv-block
    spec:
      capacity:
        storage: 100Gi
      volumeMode: Block (1)
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Delete
      storageClassName: local-storage (2)
      local:
        path: /dev/xvdf (3)
      nodeAffinity:
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - example-node

    (1) The volume mode, either Filesystem or Block, that defines the type of PVs.
    (2) The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs.
    (3) The path containing a list of local storage devices to choose from.
  2. Create the PV resource in your OKD cluster. Specify the file you just created:

    $ oc create -f <example-pv>.yaml
  3. Verify that the local PV was created:

    $ oc get pv

    Example output

    NAME                    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                STORAGECLASS    REASON   AGE
    example-pv-filesystem   100Gi      RWO            Delete           Available                        local-storage            3m47s
    example-pv1             1Gi        RWO            Delete           Bound       local-storage/pvc1   local-storage            12h
    example-pv2             1Gi        RWO            Delete           Bound       local-storage/pvc2   local-storage            12h
    example-pv3             1Gi        RWO            Delete           Bound       local-storage/pvc3   local-storage            12h
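
Because the Local Storage Operator is not used in this procedure, the storage class referenced by storageClassName is not created automatically. A minimal sketch of a matching class (the name must match the PV definitions; the WaitForFirstConsumer binding mode is a common choice for local volumes, not a requirement stated in this procedure):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-storage
    provisioner: kubernetes.io/no-provisioner    # local volumes are not dynamically provisioned
    volumeBindingMode: WaitForFirstConsumer      # delay binding until a consuming pod is scheduled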

Creating the local volume persistent volume claim

Local volumes must be statically created as a persistent volume claim (PVC) before they can be accessed by a pod.

Prerequisites

  • Persistent volumes have been created using the local volume provisioner.

Procedure

  1. Create the PVC using the corresponding storage class:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: local-pvc-name (1)
    spec:
      accessModes:
        - ReadWriteOnce
      volumeMode: Filesystem (2)
      resources:
        requests:
          storage: 100Gi (3)
      storageClassName: local-sc (4)

    (1) Name of the PVC.
    (2) The type of the PVC. Defaults to Filesystem.
    (3) The amount of storage available to the PVC.
    (4) Name of the storage class required by the claim.
  2. Create the PVC in the OKD cluster, specifying the file you just created:

    $ oc create -f <local-pvc>.yaml
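
    To check the status of the claim (a minimal check; replace the namespace with your own), run:

    $ oc get pvc local-pvc-name -n <namespace>

    When the storage class uses the WaitForFirstConsumer volume binding mode, the claim typically remains Pending until a pod that uses it is scheduled.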

Attach the local claim

After a local volume has been mapped to a persistent volume claim, it can be specified inside of a resource.

Prerequisites

  • A persistent volume claim exists in the same namespace.

Procedure

  1. Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod:

    apiVersion: v1
    kind: Pod
    spec:
      ...
      containers:
        volumeMounts:
        - name: local-disks (1)
          mountPath: /data (2)
      volumes:
        - name: local-disks
          persistentVolumeClaim:
            claimName: local-pvc-name (3)

    (1) The name of the volume to mount. It must match the name of an entry in the volumes list.
    (2) The path inside the pod where the volume is mounted. Do not mount to the container root, /, or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host.
    (3) The name of the existing persistent volume claim to use.
  2. Create the resource in the OKD cluster, specifying the file you just created:

    $ oc create -f <local-pod>.yaml
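
To confirm where the pod was scheduled (a minimal check; substitute your own pod name), note that a local PV pins the pod to the node that owns the disk:

    $ oc get pod <pod-name> -o wide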

Automating discovery and provisioning for local storage devices

The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices.

Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices.

Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node.

Prerequisites

  • You have cluster administrator permissions.

  • You have installed the Local Storage Operator.

  • You have attached local disks to OKD nodes.

  • You have access to the OKD web console and the oc command-line interface (CLI).

Procedure

  1. To enable automatic discovery of local devices from the web console:

    1. In the Administrator perspective, navigate to Operators → Installed Operators and click on the Local Volume Discovery tab.

    2. Click Create Local Volume Discovery.

    3. Select either All nodes or Select nodes, depending on whether you want to discover available disks on all or specific nodes.

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    4. Click Create.

A local volume discovery instance named auto-discover-devices is displayed.
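
You can also confirm the discovery instance from the CLI (a minimal check):

    $ oc get localvolumediscovery -n openshift-local-storage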

  2. To display a continuous list of available devices on a node:

    1. Log in to the OKD web console.

    2. Navigate to Compute → Nodes.

    3. Click the node name that you want to open. The “Node Details” page is displayed.

    4. Select the Disks tab to display the list of the selected devices.

      The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode.

  3. To automatically provision local volumes for the discovered devices from the web console:

    1. Navigate to Operators → Installed Operators and select Local Storage from the list of Operators.

    2. Select Local Volume Set → Create Local Volume Set.

    3. Enter a volume set name and a storage class name.

    4. Choose All nodes or Select nodes to apply filters accordingly.

      Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes.

    5. Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create.

      A message displays after several minutes, indicating that the “Operator reconciled successfully.”

  4. Alternatively, to provision local volumes for the discovered devices from the CLI:

    1. Create an object YAML file to define the local volume set, such as local-volume-set.yaml, as shown in the following example:

      apiVersion: local.storage.openshift.io/v1alpha1
      kind: LocalVolumeSet
      metadata:
        name: example-autodetect
      spec:
        nodeSelector:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                    - worker-0
                    - worker-1
        storageClassName: example-storageclass (1)
        volumeMode: Filesystem
        fsType: ext4
        maxDeviceCount: 10
        deviceInclusionSpec:
          deviceTypes: (2)
            - disk
            - part
          deviceMechanicalProperties:
            - NonRotational
          minSize: 10G
          maxSize: 100G
          models:
            - SAMSUNG
            - Crucial_CT525MX3
          vendors:
            - ATA
            - ST2000LM

      (1) Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes.
      (2) When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices.
    2. Create the local volume set object:

      $ oc apply -f local-volume-set.yaml
    3. Verify that the local persistent volumes were dynamically provisioned based on the storage class:

      $ oc get pv

      Example output

      NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS           REASON   AGE
      local-pv-1cec77cf   100Gi      RWO            Delete           Available           example-storageclass            88m
      local-pv-2ef7cd2a   100Gi      RWO            Delete           Available           example-storageclass            82m
      local-pv-3fa1c73    100Gi      RWO            Delete           Available           example-storageclass            48m

Results are deleted after they are removed from the node. Symlinks must be manually removed.

Using tolerations with Local Storage Operator pods

Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes.

You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node.

Taints and tolerations consist of a key, value, and effect. As an argument, they are expressed as key=value:effect. An operator allows you to leave one of these parameters empty.

Prerequisites

  • The Local Storage Operator is installed.

  • Local disks are attached to OKD nodes with a taint.

  • Tainted nodes are expected to provision local storage.

Procedure

To configure local volumes for scheduling on tainted nodes:

  1. Modify the YAML file that defines the LocalVolume resource and add the tolerations to its spec, as shown in the following example:

    apiVersion: "local.storage.openshift.io/v1"
    kind: "LocalVolume"
    metadata:
      name: "local-disks"
      namespace: "openshift-local-storage"
    spec:
      tolerations:
        - key: localstorage (1)
          operator: Equal (2)
          value: "localstorage" (3)
      storageClassDevices:
        - storageClassName: "localblock-sc"
          volumeMode: Block (4)
          devicePaths: (5)
            - /dev/xvdg

    (1) Specify the key that you added to the node.
    (2) Specify the Equal operator to require the key/value parameters to match. If operator is Exists, the system checks that the key exists and ignores the value. If operator is Equal, then the key and value must match.
    (3) Specify the value of the tainted node, localstorage in this example.
    (4) The volume mode, either Filesystem or Block, defining the type of the local volumes.
    (5) The path containing a list of local storage devices to choose from.
  2. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example:

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists

The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints.
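
For reference, a taint that the toleration in the first example would match can be applied to a node as follows (a sketch, not part of the original procedure; the NoSchedule effect is an assumption, and the toleration matches any effect because it omits the effect field):

    $ oc adm taint nodes <node-name> localstorage=localstorage:NoSchedule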

Local Storage Operator Metrics

OKD provides the following metrics for the Local Storage Operator:

  • lso_discovery_disk_count: total number of discovered devices on each node

  • lso_lvset_provisioned_PV_count: total number of PVs created by LocalVolumeSet objects

  • lso_lvset_unmatched_disk_count: total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria

  • lso_lvset_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolumeSet object criteria

  • lso_lv_orphaned_symlink_count: number of devices with PVs that no longer match LocalVolume object criteria

  • lso_lv_provisioned_PV_count: total number of provisioned PVs for LocalVolume

To use these metrics, be sure to:

  • Enable support for monitoring when installing the Local Storage Operator.

  • When upgrading to OKD 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace.
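
For example, to chart the total number of PVs that LocalVolumeSet objects have provisioned, you can run a query such as the following in the console metrics UI (a minimal sketch; the console path and metric labels may differ by version, so adjust label filters as needed):

    sum(lso_lvset_provisioned_PV_count)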

For more information about metrics, see Managing metrics.

Deleting the Local Storage Operator resources

Removing a local volume or local volume set

Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed.

The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource.

Prerequisites

  • The persistent volume must be in a Released or Available state.

    Deleting a persistent volume that is still in use can result in data loss or corruption.

Procedure

  1. Edit the previously created local volume to remove any unwanted disks.

    1. Edit the cluster resource:

      $ oc edit localvolume <name> -n openshift-local-storage
    2. Navigate to the lines under devicePaths, and delete any representing unwanted disks.

  2. Delete any persistent volumes created.

    $ oc delete pv <pv-name>
  3. Delete any symlinks on the node.

    The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability.

    1. Create a debug pod on the node:

      $ oc debug node/<node-name>
    2. Change your root directory to /host:

      $ chroot /host
    3. Navigate to the directory containing the local volume symlinks.

      $ cd /mnt/openshift-local-storage/<sc-name> (1)

      (1) The name of the storage class used to create the local volumes.
    4. Delete the symlink belonging to the removed device.

      $ rm <symlink>

Uninstalling the Local Storage Operator

To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project.

Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator’s removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources.

Prerequisites

  • Access to the OKD web console.

Procedure

  1. Delete any local volume resources installed in the project, such as localvolume, localvolumeset, and localvolumediscovery:

    $ oc delete localvolume --all --all-namespaces
    $ oc delete localvolumeset --all --all-namespaces
    $ oc delete localvolumediscovery --all --all-namespaces
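
    To confirm that no local storage resources remain before removing the Operator (a minimal check), run:

    $ oc get localvolume,localvolumeset,localvolumediscovery --all-namespaces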
  2. Uninstall the Local Storage Operator from the web console.

    1. Log in to the OKD web console.

    2. Navigate to Operators → Installed Operators.

    3. Type Local Storage into the filter box to locate the Local Storage Operator.

    4. Click the Options menu at the end of the Local Storage Operator row.

    5. Click Uninstall Operator.

    6. Click Remove in the window that appears.

  3. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command:

    $ oc delete pv <pv-name>
  4. Delete the openshift-local-storage project:

    $ oc delete project openshift-local-storage