Persistent storage using logical volume manager storage

Logical volume manager storage (LVM Storage) uses the TopoLVM CSI driver to dynamically provision local storage on single-node OpenShift clusters.

LVM Storage creates thin-provisioned volumes using Logical Volume Manager and provides dynamic provisioning of block storage on a single-node OpenShift cluster with limited resources.

Deploying LVM Storage on single-node OpenShift clusters

You can deploy LVM Storage on a single-node OpenShift bare-metal or user-provisioned infrastructure cluster and configure it to dynamically provision storage for your workloads.

LVM Storage creates a volume group using all the available unused disks and creates a single thin pool with a size of 90% of the volume group. The remaining 10% of the volume group is left free to enable data recovery by expanding the thin pool when required. You might need to manually perform such recovery.

You can use persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage to request storage and create volume snapshots.

LVM Storage configures a default overprovisioning limit of 10 to take advantage of the thin-provisioning feature. The total size of the volumes and volume snapshots that can be created on the single-node OpenShift clusters is 10 times the size of the thin pool.
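For example, as a rough illustration of these defaults (the 1 TiB figure is only an example, not a requirement):

    volume group size       = 1 TiB (all unused disks)
    thin pool size          = 90% of 1 TiB, or about 900 GiB
    reserved for recovery   = 10% of 1 TiB, or about 100 GiB
    provisionable capacity  = 10 x 900 GiB, or about 9 TiB (thin-provisioned)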

You can deploy LVM Storage on single-node OpenShift clusters using one of the following:

  • Red Hat Advanced Cluster Management (RHACM)

  • OKD Web Console

Requirements

Before you begin deploying LVM Storage on single-node OpenShift clusters, ensure that the following requirements are met:

  • You have installed Red Hat Advanced Cluster Management (RHACM) on an OKD cluster.

  • Every managed single-node OpenShift cluster has dedicated disks that are used to provision storage.

Before you deploy LVM Storage on single-node OpenShift clusters, be aware of the following limitations:

  • You can only create a single instance of the LVMCluster custom resource (CR) on an OKD cluster.

  • When a device becomes part of the LVMCluster CR, it cannot be removed.

Limitations

When you deploy LVM Storage on single-node OpenShift clusters, the following limitations apply:

  • The total storage size is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the overprovisioning factor.

  • The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).

    • It is possible to define the size of PE and LE during the physical and logical device creation.

    • The default PE and LE size is 4 MB.

    • If the size of the PE is increased, the maximum size of a logical volume is determined by the kernel limits and your disk space.

Table 1. Size limits for different architectures using the default PE and LE size
Architecture | RHEL 6               | RHEL 7               | RHEL 8 | RHEL 9
32-bit       | 16 TB                | -                    | -      | -
64-bit       | 8 EB [1], 100 TB [2] | 8 EB [1], 500 TB [2] | 8 EB   | 8 EB

  1. Theoretical size.

  2. Tested size.
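As a simple illustration of how the extent size determines logical volume sizes (the extent count below is only an example):

    logical volume size = number of logical extents x extent size
    for example: 25,600 LEs x 4 MB (default) = 100 GB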

Additional resources

Installing LVM Storage with the CLI

As a cluster administrator, you can install Logical volume manager storage (LVM Storage) by using the CLI.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

Procedure

  1. Create a namespace for the LVM Storage Operator.

    1. Save the following YAML in the lvms-namespace.yaml file:

      1. apiVersion: v1
      2. kind: Namespace
      3. metadata:
      4. labels:
      5. openshift.io/cluster-monitoring: "true"
      6. pod-security.kubernetes.io/enforce: privileged
      7. pod-security.kubernetes.io/audit: privileged
      8. pod-security.kubernetes.io/warn: privileged
      9. name: openshift-storage
    2. Create the Namespace CR:

      1. $ oc create -f lvms-namespace.yaml
  2. Create an Operator group for the LVM Storage Operator.

    1. Save the following YAML in the lvms-operatorgroup.yaml file:

      1. apiVersion: operators.coreos.com/v1
      2. kind: OperatorGroup
      3. metadata:
      4. name: openshift-storage-operatorgroup
      5. namespace: openshift-storage
      6. spec:
      7. targetNamespaces:
      8. - openshift-storage
    2. Create the OperatorGroup CR:

      1. $ oc create -f lvms-operatorgroup.yaml
  3. Subscribe to the LVM Storage Operator.

    1. Save the following YAML in the lvms-sub.yaml file:

      1. apiVersion: operators.coreos.com/v1alpha1
      2. kind: Subscription
      3. metadata:
      4. name: lvms
      5. namespace: openshift-storage
      6. spec:
      7. installPlanApproval: Automatic
      8. name: lvms-operator
      9. source: redhat-operators
      10. sourceNamespace: openshift-marketplace
    2. Create the Subscription CR:

      1. $ oc create -f lvms-sub.yaml
  4. Create the LVMCluster resource:

    1. Save the following YAML in the lvmcluster.yaml file:

      1. apiVersion: lvm.topolvm.io/v1alpha1
      2. kind: LVMCluster
      3. metadata:
      4. name: my-lvmcluster
      5. namespace: openshift-storage
      6. spec:
      7. storage:
      8. deviceClasses:
      9. - name: vg1
      10. deviceSelector:
      11. paths:
      12. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
      13. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
      14. optionalPaths:
      15. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
      16. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
      17. thinPoolConfig:
      18. name: thin-pool-1
      19. sizePercent: 90
      20. overprovisionRatio: 10
      21. nodeSelector:
      22. nodeSelectorTerms:
      23. - matchExpressions:
      24. - key: app
      25. operator: In
      26. values:
      27. - test1
    2. Create the LVMCluster CR:

      1. $ oc create -f lvmcluster.yaml
  5. To verify that the Operator is installed, enter the following command:

    1. $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

    Example output

    1. Name Phase
    2. 4.13.0-202301261535 Succeeded
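
    Optionally, you can also confirm that the LVMCluster CR that you created reaches the Ready state. The following check is a sketch that reads the status.state field shown in the LVM Storage reference YAML file; the Ready output is hypothetical:

    $ oc get lvmclusters.lvm.topolvm.io -n openshift-storage -o jsonpath='{.items[*].status.state}'

    Example output

    Ready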

Installing LVM Storage with the web console

You can install Logical volume manager storage (LVM Storage) by using the Red Hat OKD OperatorHub.

Prerequisites

  • You have access to the single-node OpenShift cluster.

  • You are using an account with the cluster-admin and Operator installation permissions.

Procedure

  1. Log in to the OKD Web Console.

  2. Click Operators → OperatorHub.

  3. Scroll or type LVM Storage into the Filter by keyword box to find LVM Storage.

  4. Click Install.

  5. Set the following options on the Install Operator page:

    1. Update Channel as stable-4.14.

    2. Installation Mode as A specific namespace on the cluster.

    3. Installed Namespace as Operator recommended namespace openshift-storage. If the openshift-storage namespace does not exist, it is created during the operator installation.

    4. Approval Strategy as Automatic or Manual.

      If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention.

      If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version.

  6. Click Install.

Verification steps

  • Verify that LVM Storage shows a green tick, indicating successful installation.
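
  If you prefer to verify from the command line, the CSV check from the CLI installation procedure also works here; this assumes that the Operator was installed in the openshift-storage namespace:

    $ oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase

  The Phase column shows Succeeded when the installation is complete.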

Uninstalling LVM Storage installed using the OpenShift Web Console

You can uninstall LVM Storage using the Red Hat OpenShift Container Platform Web Console.

Prerequisites

  • You deleted all the applications on the clusters that are using the storage provisioned by LVM Storage.

  • You deleted the persistent volume claims (PVCs) and persistent volumes (PVs) provisioned using LVM Storage.

  • You deleted all volume snapshots provisioned by LVM Storage.

  • You verified that no logical volume resources exist by using the oc get logicalvolume command.

  • You have access to the single-node OpenShift cluster using an account with cluster-admin permissions.

Procedure

  1. From the Operators → Installed Operators page, scroll to LVM Storage, or type LVM Storage into the Filter by name field to find it, and then click it.

  2. Click on the LVMCluster tab.

  3. On the right-hand side of the LVMCluster page, select Delete LVMCluster from the Actions drop-down menu.

  4. Click on the Details tab.

  5. On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu.

  6. Select Remove. LVM Storage stops running and is completely removed.

Installing LVM Storage in a disconnected environment

You can install LVM Storage on OKD 4.14 in a disconnected environment. All sections referenced in this procedure are linked in Additional resources.

Prerequisites

  • You read the About disconnected installation mirroring section.

  • You have access to the OKD image repository.

  • You created a mirror registry.

Procedure

  1. Follow the steps in the Creating the image set configuration procedure. To create an ImageSetConfiguration resource for LVM Storage, you can use the following example YAML file:

    Example ImageSetConfiguration file for LVM Storage

    1. kind: ImageSetConfiguration
    2. apiVersion: mirror.openshift.io/v1alpha2
    3. archiveSize: 4 (1)
    4. storageConfig: (2)
    5. registry:
    6. imageURL: example.com/mirror/oc-mirror-metadata (3)
    7. skipTLS: false
    8. mirror:
    9. platform:
    10. channels:
    11. - name: stable-4.14 (4)
    12. type: ocp
    13. graph: true (5)
    14. operators:
    15. - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 (6)
    16. packages:
    17. - name: lvms-operator (7)
    18. channels:
    19. - name: stable (8)
    20. additionalImages:
    21. - name: registry.redhat.io/ubi9/ubi:latest (9)
    22. helm: {}
    1Add archiveSize to set the maximum size, in GiB, of each file within the image set.
    2Set the back-end location to save the image set metadata to. This location can be a registry or local directory. It is required to specify storageConfig values, unless you are using the Technology Preview OCI feature.
    3Set the registry URL for the storage backend.
    4Set the channel to retrieve the OKD images from.
    5Add graph: true to generate the OpenShift Update Service (OSUS) graph image to allow for an improved cluster update experience when using the web console. For more information, see About the OpenShift Update Service.
    6Set the Operator catalog to retrieve the OKD images from.
    7Specify only certain Operator packages to include in the image set. Remove this field to retrieve all packages in the catalog.
    8Specify only certain channels of the Operator packages to include in the image set. You must always include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
    9Specify any additional images to include in image set.
  2. Follow the procedure in the Mirroring an image set to a mirror registry section.

  3. Follow the procedure in the Configuring image registry repository mirroring section.
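
    As a hedged sketch of step 2, mirroring the image set that is defined in the example ImageSetConfiguration file typically uses the oc-mirror plugin. The file name imageset-config.yaml and the registry host registry.example.com:5000 are assumptions for illustration only:

    $ oc mirror --config=imageset-config.yaml docker://registry.example.com:5000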

Additional resources

Installing LVM Storage using RHACM

LVM Storage is deployed on single-node OpenShift clusters using Red Hat Advanced Cluster Management (RHACM). You create a Policy object on RHACM that deploys and configures the Operator when it is applied to managed clusters which match the selector specified in the PlacementRule resource. The policy is also applied to clusters that are imported later and satisfy the placement rule.

Prerequisites

  • Access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.

  • Dedicated disks on each single-node OpenShift cluster to be used by LVM Storage.

  • The single-node OpenShift cluster needs to be managed by RHACM, either imported or created.

Procedure

  1. Log in to the RHACM CLI using your OKD credentials.

  2. Create a namespace in which you will create policies.

    1. # oc create ns lvms-policy-ns
  3. To create a policy, save the following YAML to a file with a name such as policy-lvms-operator.yaml:

    1. apiVersion: apps.open-cluster-management.io/v1
    2. kind: PlacementRule
    3. metadata:
    4. name: placement-install-lvms
    5. spec:
    6. clusterConditions:
    7. - status: "True"
    8. type: ManagedClusterConditionAvailable
    9. clusterSelector: (1)
    10. matchExpressions:
    11. - key: mykey
    12. operator: In
    13. values:
    14. - myvalue
    15. ---
    16. apiVersion: policy.open-cluster-management.io/v1
    17. kind: PlacementBinding
    18. metadata:
    19. name: binding-install-lvms
    20. placementRef:
    21. apiGroup: apps.open-cluster-management.io
    22. kind: PlacementRule
    23. name: placement-install-lvms
    24. subjects:
    25. - apiGroup: policy.open-cluster-management.io
    26. kind: Policy
    27. name: install-lvms
    28. ---
    29. apiVersion: policy.open-cluster-management.io/v1
    30. kind: Policy
    31. metadata:
    32. annotations:
    33. policy.open-cluster-management.io/categories: CM Configuration Management
    34. policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    35. policy.open-cluster-management.io/standards: NIST SP 800-53
    36. name: install-lvms
    37. spec:
    38. disabled: false
    39. remediationAction: enforce
    40. policy-templates:
    41. - objectDefinition:
    42. apiVersion: policy.open-cluster-management.io/v1
    43. kind: ConfigurationPolicy
    44. metadata:
    45. name: install-lvms
    46. spec:
    47. object-templates:
    48. - complianceType: musthave
    49. objectDefinition:
    50. apiVersion: v1
    51. kind: Namespace
    52. metadata:
    53. labels:
    54. openshift.io/cluster-monitoring: "true"
    55. pod-security.kubernetes.io/enforce: privileged
    56. pod-security.kubernetes.io/audit: privileged
    57. pod-security.kubernetes.io/warn: privileged
    58. name: openshift-storage
    59. - complianceType: musthave
    60. objectDefinition:
    61. apiVersion: operators.coreos.com/v1
    62. kind: OperatorGroup
    63. metadata:
    64. name: openshift-storage-operatorgroup
    65. namespace: openshift-storage
    66. spec:
    67. targetNamespaces:
    68. - openshift-storage
    69. - complianceType: musthave
    70. objectDefinition:
    71. apiVersion: operators.coreos.com/v1alpha1
    72. kind: Subscription
    73. metadata:
    74. name: lvms
    75. namespace: openshift-storage
    76. spec:
    77. installPlanApproval: Automatic
    78. name: lvms-operator
    79. source: redhat-operators
    80. sourceNamespace: openshift-marketplace
    81. remediationAction: enforce
    82. severity: low
    83. - objectDefinition:
    84. apiVersion: policy.open-cluster-management.io/v1
    85. kind: ConfigurationPolicy
    86. metadata:
    87. name: lvms
    88. spec:
    89. object-templates:
    90. - complianceType: musthave
    91. objectDefinition:
    92. apiVersion: lvm.topolvm.io/v1alpha1
    93. kind: LVMCluster
    94. metadata:
    95. name: my-lvmcluster
    96. namespace: openshift-storage
    97. spec:
    98. storage:
    99. deviceClasses:
    100. - name: vg1
    101. default: true
    102. deviceSelector: (2)
    103. paths:
    104. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
    105. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
    106. optionalPaths:
    107. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    108. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    109. thinPoolConfig:
    110. name: thin-pool-1
    111. sizePercent: 90
    112. overprovisionRatio: 10
    113. nodeSelector: (3)
    114. nodeSelectorTerms:
    115. - matchExpressions:
    116. - key: app
    117. operator: In
    118. values:
    119. - test1
    120. remediationAction: enforce
    121. severity: low
    1Replace the key and value in PlacementRule.spec.clusterSelector to match the labels set on the single-node OpenShift clusters on which you want to install LVM Storage.
    2Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the deviceSelector section of the LVMCluster YAML. The paths section refers to devices the LVMCluster adds, which means those paths must exist. The optionalPaths section refers to devices the LVMCluster might add. You must specify at least one of paths or optionalPaths when specifying the deviceSelector section. If you specify paths, it is not mandatory to specify optionalPaths. If you specify optionalPaths, it is not mandatory to specify paths but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node.
    3To add a node filter, which is a subset of the additional worker nodes, specify the required filter in the nodeSelector section. LVM Storage detects and uses the additional worker nodes when the new nodes show up.

    This nodeSelector node filter matching is not the same as the pod label matching.

  4. Create the policy in the namespace by running the following command:

    1. # oc create -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1The policy-lvms-operator.yaml is the name of the file to which the policy is saved.

    This creates a Policy, a PlacementRule, and a PlacementBinding object in the lvms-policy-ns namespace. The policy creates a Namespace, OperatorGroup, Subscription, and LVMCluster resource on the clusters that match the placement rule. This deploys the Operator on the single-node OpenShift clusters which match the selection criteria and configures it to set up the required resources to provision storage. The Operator uses all the disks specified in the LVMCluster CR. If no disks are specified, the Operator uses all the unused disks on the single-node OpenShift node.

    After a device is added to the LVMCluster, it cannot be removed.
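
    Optionally, you can confirm that the policy was created and is compliant by reusing the policy status check that appears later in the uninstall procedure:

    # oc get policy -n lvms-policy-ns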

Additional resources

Uninstalling LVM Storage installed using RHACM

To uninstall LVM Storage that you installed using RHACM, you need to delete the RHACM policy that you created for deploying and configuring the Operator.

When you delete the RHACM policy, the resources that the policy created are not removed. To remove them, you must create additional policies and perform the following steps:

  1. Remove all the Persistent volume claims (PVCs) and volume snapshots provisioned by LVM Storage.

  2. Remove the LVMCluster resources to clean up Logical Volume Manager resources created on the disks.

  3. Create an additional policy to uninstall the Operator.

Prerequisites

  • Ensure that the following are deleted before deleting the policy:

    • All the applications on the managed clusters that are using the storage provisioned by LVM Storage.

    • PVCs and persistent volumes (PVs) provisioned using LVM Storage.

    • All volume snapshots provisioned by LVM Storage.

  • Ensure you have access to the RHACM cluster using an account with a cluster-admin role.

Procedure

  1. In the OpenShift CLI (oc), delete the RHACM policy that you created for deploying and configuring LVM Storage on the hub cluster by using the following command:

    1. # oc delete -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1The policy-lvms-operator.yaml is the name of the file to which the policy was saved.
  2. To create a policy for removing the LVMCluster resource, save the following YAML to a file with a name such as lvms-remove-policy.yaml. This enables the Operator to clean up all Logical Volume Manager resources that it created on the cluster.

    1. apiVersion: policy.open-cluster-management.io/v1
    2. kind: Policy
    3. metadata:
    4. name: policy-lvmcluster-delete
    5. annotations:
    6. policy.open-cluster-management.io/standards: NIST SP 800-53
    7. policy.open-cluster-management.io/categories: CM Configuration Management
    8. policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    9. spec:
    10. remediationAction: enforce
    11. disabled: false
    12. policy-templates:
    13. - objectDefinition:
    14. apiVersion: policy.open-cluster-management.io/v1
    15. kind: ConfigurationPolicy
    16. metadata:
    17. name: policy-lvmcluster-removal
    18. spec:
    19. remediationAction: enforce (1)
    20. severity: low
    21. object-templates:
    22. - complianceType: mustnothave
    23. objectDefinition:
    24. kind: LVMCluster
    25. apiVersion: lvm.topolvm.io/v1alpha1
    26. metadata:
    27. name: my-lvmcluster
    28. namespace: openshift-storage (2)
    29. ---
    30. apiVersion: policy.open-cluster-management.io/v1
    31. kind: PlacementBinding
    32. metadata:
    33. name: binding-policy-lvmcluster-delete
    34. placementRef:
    35. apiGroup: apps.open-cluster-management.io
    36. kind: PlacementRule
    37. name: placement-policy-lvmcluster-delete
    38. subjects:
    39. - apiGroup: policy.open-cluster-management.io
    40. kind: Policy
    41. name: policy-lvmcluster-delete
    42. ---
    43. apiVersion: apps.open-cluster-management.io/v1
    44. kind: PlacementRule
    45. metadata:
    46. name: placement-policy-lvmcluster-delete
    47. spec:
    48. clusterConditions:
    49. - status: "True"
    50. type: ManagedClusterConditionAvailable
    51. clusterSelector:
    52. matchExpressions:
    53. - key: mykey
    54. operator: In
    55. values:
    56. - myvalue
    1The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2This namespace field must have the openshift-storage value.
  3. Set the value of the PlacementRule.spec.clusterSelector field to select the clusters from which to uninstall LVM Storage.

  4. Create the policy by running the following command:

    1. # oc create -f lvms-remove-policy.yaml -n lvms-policy-ns
  5. To create a policy to check if the LVMCluster CR has been removed, save the following YAML to a file with a name such as check-lvms-remove-policy.yaml:

    1. apiVersion: policy.open-cluster-management.io/v1
    2. kind: Policy
    3. metadata:
    4. name: policy-lvmcluster-inform
    5. annotations:
    6. policy.open-cluster-management.io/standards: NIST SP 800-53
    7. policy.open-cluster-management.io/categories: CM Configuration Management
    8. policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    9. spec:
    10. remediationAction: inform
    11. disabled: false
    12. policy-templates:
    13. - objectDefinition:
    14. apiVersion: policy.open-cluster-management.io/v1
    15. kind: ConfigurationPolicy
    16. metadata:
    17. name: policy-lvmcluster-removal-inform
    18. spec:
    19. remediationAction: inform (1)
    20. severity: low
    21. object-templates:
    22. - complianceType: mustnothave
    23. objectDefinition:
    24. kind: LVMCluster
    25. apiVersion: lvm.topolvm.io/v1alpha1
    26. metadata:
    27. name: my-lvmcluster
    28. namespace: openshift-storage (2)
    29. ---
    30. apiVersion: policy.open-cluster-management.io/v1
    31. kind: PlacementBinding
    32. metadata:
    33. name: binding-policy-lvmcluster-check
    34. placementRef:
    35. apiGroup: apps.open-cluster-management.io
    36. kind: PlacementRule
    37. name: placement-policy-lvmcluster-check
    38. subjects:
    39. - apiGroup: policy.open-cluster-management.io
    40. kind: Policy
    41. name: policy-lvmcluster-inform
    42. ---
    43. apiVersion: apps.open-cluster-management.io/v1
    44. kind: PlacementRule
    45. metadata:
    46. name: placement-policy-lvmcluster-check
    47. spec:
    48. clusterConditions:
    49. - status: "True"
    50. type: ManagedClusterConditionAvailable
    51. clusterSelector:
    52. matchExpressions:
    53. - key: mykey
    54. operator: In
    55. values:
    56. - myvalue
    1The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
    2The namespace field must have the openshift-storage value.
  6. Create the policy by running the following command:

    1. # oc create -f check-lvms-remove-policy.yaml -n lvms-policy-ns
  7. Check the policy status by running the following command:

    1. # oc get policy -n lvms-policy-ns

    Example output

    1. NAME REMEDIATION ACTION COMPLIANCE STATE AGE
    2. policy-lvmcluster-delete enforce Compliant 15m
    3. policy-lvmcluster-inform inform Compliant 15m
  8. After both the policies are compliant, save the following YAML to a file with a name such as lvms-uninstall-policy.yaml to create a policy to uninstall LVM Storage.

    1. apiVersion: apps.open-cluster-management.io/v1
    2. kind: PlacementRule
    3. metadata:
    4. name: placement-uninstall-lvms
    5. spec:
    6. clusterConditions:
    7. - status: "True"
    8. type: ManagedClusterConditionAvailable
    9. clusterSelector:
    10. matchExpressions:
    11. - key: mykey
    12. operator: In
    13. values:
    14. - myvalue
    15. ---
    16. apiVersion: policy.open-cluster-management.io/v1
    17. kind: PlacementBinding
    18. metadata:
    19. name: binding-uninstall-lvms
    20. placementRef:
    21. apiGroup: apps.open-cluster-management.io
    22. kind: PlacementRule
    23. name: placement-uninstall-lvms
    24. subjects:
    25. - apiGroup: policy.open-cluster-management.io
    26. kind: Policy
    27. name: uninstall-lvms
    28. ---
    29. apiVersion: policy.open-cluster-management.io/v1
    30. kind: Policy
    31. metadata:
    32. annotations:
    33. policy.open-cluster-management.io/categories: CM Configuration Management
    34. policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    35. policy.open-cluster-management.io/standards: NIST SP 800-53
    36. name: uninstall-lvms
    37. spec:
    38. disabled: false
    39. policy-templates:
    40. - objectDefinition:
    41. apiVersion: policy.open-cluster-management.io/v1
    42. kind: ConfigurationPolicy
    43. metadata:
    44. name: uninstall-lvms
    45. spec:
    46. object-templates:
    47. - complianceType: mustnothave
    48. objectDefinition:
    49. apiVersion: v1
    50. kind: Namespace
    51. metadata:
    52. name: openshift-storage
    53. - complianceType: mustnothave
    54. objectDefinition:
    55. apiVersion: operators.coreos.com/v1
    56. kind: OperatorGroup
    57. metadata:
    58. name: openshift-storage-operatorgroup
    59. namespace: openshift-storage
    60. spec:
    61. targetNamespaces:
    62. - openshift-storage
    63. - complianceType: mustnothave
    64. objectDefinition:
    65. apiVersion: operators.coreos.com/v1alpha1
    66. kind: Subscription
    67. metadata:
    68. name: lvms-operator
    69. namespace: openshift-storage
    70. remediationAction: enforce
    71. severity: low
    72. - objectDefinition:
    73. apiVersion: policy.open-cluster-management.io/v1
    74. kind: ConfigurationPolicy
    75. metadata:
    76. name: policy-remove-lvms-crds
    77. spec:
    78. object-templates:
    79. - complianceType: mustnothave
    80. objectDefinition:
    81. apiVersion: apiextensions.k8s.io/v1
    82. kind: CustomResourceDefinition
    83. metadata:
    84. name: logicalvolumes.topolvm.io
    85. - complianceType: mustnothave
    86. objectDefinition:
    87. apiVersion: apiextensions.k8s.io/v1
    88. kind: CustomResourceDefinition
    89. metadata:
    90. name: lvmclusters.lvm.topolvm.io
    91. - complianceType: mustnothave
    92. objectDefinition:
    93. apiVersion: apiextensions.k8s.io/v1
    94. kind: CustomResourceDefinition
    95. metadata:
    96. name: lvmvolumegroupnodestatuses.lvm.topolvm.io
    97. - complianceType: mustnothave
    98. objectDefinition:
    99. apiVersion: apiextensions.k8s.io/v1
    100. kind: CustomResourceDefinition
    101. metadata:
    102. name: lvmvolumegroups.lvm.topolvm.io
    103. remediationAction: enforce
    104. severity: high
  9. Create the policy by running the following command:

    1. # oc create -f lvms-uninstall-policy.yaml -n lvms-policy-ns

Additional resources

Creating a Logical Volume Manager cluster on a single-node OpenShift worker node

You can configure a single-node OpenShift worker node as a Logical Volume Manager cluster. On the control-plane single-node OpenShift node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster.

When you create a Logical Volume Manager cluster, StorageClass and LVMVolumeGroup resources work together to provide dynamic provisioning of storage. StorageClass CRs define the properties of the storage that you can dynamically provision. LVMVolumeGroup is a specific type of persistent volume (PV) that is backed by an LVM Volume Group. LVMVolumeGroup CRs provide the back-end storage for the persistent volumes that you create.

Perform the following procedure to create a Logical Volume Manager cluster on a single-node OpenShift worker node.

You also can perform the same task by using the OKD web console.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You installed LVM Storage in a single-node OpenShift cluster and have installed a worker node for use in the single-node OpenShift cluster.

Procedure

  1. Create the LVMCluster custom resource (CR).

    1. Save the following YAML in the lvmcluster.yaml file:

      1. apiVersion: lvm.topolvm.io/v1alpha1
      2. kind: LVMCluster
      3. metadata:
      4. name: lvmcluster
      5. spec:
      6. storage:
      7. deviceClasses: (1)
      8. - name: vg1
      9. fstype: ext4 (2)
      10. default: true (3)
      11. deviceSelector: (4)
      12. paths:
      13. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
      14. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
      15. optionalPaths:
      16. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
      17. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
      18. thinPoolConfig:
      19. name: thin-pool-1
      20. sizePercent: 90
      21. overprovisionRatio: 10
      22. nodeSelector: (5)
      23. nodeSelectorTerms:
      24. - matchExpressions:
      25. - key: app
      26. operator: In
      27. values:
      28. - test1
      1To create multiple device storage classes in the cluster, create a YAML array under deviceClasses for each required storage class. Configure the local device paths of the disks as an array of values in the deviceSelector field. When configuring multiple device classes, you must specify the device path for each device.
      2Set fstype to ext4 or xfs. By default, it is set to xfs if the setting is not specified.
      3Mandatory: The LVMCluster resource must contain a single default storage class. Set default: false for secondary device storage classes. If you are upgrading the LVMCluster resource from a previous version, you must specify a single default device class.
      4Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the deviceSelector section of the LVMCluster YAML. The paths section refers to devices the LVMCluster adds, which means those paths must exist. The optionalPaths section refers to devices the LVMCluster might add. You must specify at least one of paths or optionalPaths when specifying the deviceSelector section. If you specify paths, it is not mandatory to specify optionalPaths. If you specify optionalPaths, it is not mandatory to specify paths but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node.
      5Optional: To control what worker nodes the LVMCluster CR is applied to, specify a set of node selector labels. The specified labels must be present on the node in order for the LVMCluster to be scheduled on that node.
    2. Create the LVMCluster CR:

      1. $ oc create -f lvmcluster.yaml

      Example output

      1. lvmcluster/lvmcluster created

      The LVMCluster resource creates the following system-managed CRs:

      LVMVolumeGroup

      Tracks individual volume groups across multiple nodes.

      LVMVolumeGroupNodeStatus

      Tracks the status of the volume groups on a node.

Verification

Verify that the LVMCluster resource has created the StorageClass, LVMVolumeGroup, and LVMVolumeGroupNodeStatus CRs.

LVMVolumeGroup and LVMVolumeGroupNodeStatus are managed by LVM Storage. Do not edit these CRs directly.

  1. Check that the LVMCluster CR is in a ready state by running the following command:

    1. $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}'

    Example output

    1. {
    2. "name": "vg1",
    3. "nodeStatus": [
    4. {
    5. "devices": [
    6. "/dev/nvme0n1",
    7. "/dev/nvme1n1",
    8. "/dev/nvme2n1"
    9. ],
    10. "node": "kube-node",
    11. "status": "Ready"
    12. }
    13. ]
    14. }
  2. Check that the storage class is created:

    1. $ oc get storageclass

    Example output

    1. NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
    2. lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m
  3. Check that the volume snapshot class is created:

    1. $ oc get volumesnapshotclass

    Example output

    1. NAME DRIVER DELETIONPOLICY AGE
    2. lvms-vg1 topolvm.io Delete 24h
  4. Check that the LVMVolumeGroup resource is created:

    1. $ oc get lvmvolumegroup vg1 -o yaml

    Example output

    1. apiVersion: lvm.topolvm.io/v1alpha1
    2. kind: LVMVolumeGroup
    3. metadata:
    4. creationTimestamp: "2022-02-02T05:16:42Z"
    5. generation: 1
    6. name: vg1
    7. namespace: lvm-operator-system
    8. resourceVersion: "17242461"
    9. uid: 88e8ad7d-1544-41fb-9a8e-12b1a66ab157
    10. spec: {}
  5. Check that the LVMVolumeGroupNodeStatus resource is created:

    1. $ oc get lvmvolumegroupnodestatuses.lvm.topolvm.io kube-node -o yaml

    Example output

    1. apiVersion: lvm.topolvm.io/v1alpha1
    2. kind: LVMVolumeGroupNodeStatus
    3. metadata:
    4. creationTimestamp: "2022-02-02T05:17:59Z"
    5. generation: 1
    6. name: kube-node
    7. namespace: lvm-operator-system
    8. resourceVersion: "17242882"
    9. uid: 292de9bb-3a9b-4ee8-946a-9b587986dafd
    10. spec:
    11. nodeStatus:
    12. - devices:
    13. - /dev/nvme0n1
    14. - /dev/nvme1n1
    15. - /dev/nvme2n1
    16. name: vg1
    17. status: Ready

Additional resources

Adding a storage class

You can add a storage class to an OKD cluster. A storage class describes a class of storage in the cluster and how the cluster dynamically provisions the persistent volumes (PVs) when the user specifies the storage class. A storage class describes the type of device classes, the quality-of-service level, the filesystem type, and other details.

Procedure

  1. Create a YAML file:

    1. apiVersion: storage.k8s.io/v1
    2. kind: StorageClass
    3. metadata:
    4. name: lvm-storageclass
    5. parameters:
    6. csi.storage.k8s.io/fstype: ext4
    7. topolvm.io/device-class: vg1
    8. provisioner: topolvm.io
    9. reclaimPolicy: Delete
    10. allowVolumeExpansion: true
    11. volumeBindingMode: WaitForFirstConsumer

    Save the file by using a name similar to the storage class name. For example, lvm-storageclass.yaml.

  2. Apply the YAML file by using the oc command:

    1. $ oc apply -f <file_name> (1)
    1Replace <file_name> with the name of the YAML file. For example, lvm-storageclass.yaml.

    The cluster will create the storage class.

  3. Verify that the cluster created the storage class by using the following command:

    1. $ oc get storageclass <name> (1)
    1Replace <name> with the name of the storage class. For example, lvm-storageclass.

    Example output

    1. NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
    2. lvm-storageclass topolvm.io Delete WaitForFirstConsumer true 1s

Provisioning storage using LVM Storage

You can provision persistent volume claims (PVCs) using the storage class that is created during the Operator installation. You can provision block and file PVCs; however, the storage is allocated only when a pod that uses the PVC is created.

LVM Storage provisions PVCs in units of 1 GiB. The requested storage is rounded up to the nearest GiB.
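For example, as an illustration of this rounding (the request value is arbitrary), a PVC that requests 1100Mi is provisioned with 2Gi, because the requested size is rounded up to the next full GiB:

    resources:
      requests:
        storage: 1100Mi   # provisioned as 2Gi by LVM Storage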

Procedure

  1. Identify the StorageClass that is created when LVM Storage is deployed.

    The StorageClass name is in the format lvms-<device-class-name>, where <device-class-name> is the name of the device class that you provided in the LVMCluster CR of the policy YAML. For example, if the device class is called vg1, then the storage class name is lvms-vg1.

    The volumeBindingMode of the storage class is set to WaitForFirstConsumer.

  2. To create a PVC where the application requires storage, save the following YAML to a file with a name such as pvc.yaml.

    Example YAML to create a PVC

    1. # block pvc
    2. apiVersion: v1
    3. kind: PersistentVolumeClaim
    4. metadata:
    5. name: lvm-block-1
    6. namespace: default
    7. spec:
    8. accessModes:
    9. - ReadWriteOnce
    10. volumeMode: Block
    11. resources:
    12. requests:
    13. storage: 10Gi
    14. storageClassName: lvms-vg1
    15. ---
    16. # file pvc
    17. apiVersion: v1
    18. kind: PersistentVolumeClaim
    19. metadata:
    20. name: lvm-file-1
    21. namespace: default
    22. spec:
    23. accessModes:
    24. - ReadWriteOnce
    25. volumeMode: Filesystem
    26. resources:
    27. requests:
    28. storage: 10Gi
    29. storageClassName: lvms-vg1
  3. Create the PVC by running the following command:

    1. # oc create -f pvc.yaml -n <application_namespace>

    The created PVCs remain in pending state until you deploy the pods that use them.

Monitoring LVM Storage

When LVM Storage is installed using the OKD Web Console, you can monitor the cluster by using the Block and File dashboard in the console by default. However, when you use RHACM to install LVM Storage, you need to configure RHACM Observability to monitor all the single-node OpenShift clusters from one place.

Metrics

You can monitor LVM Storage by viewing the metrics exported by the Operator on the RHACM dashboards and the alerts that are triggered.

  • Add the following topolvm metrics to the allow list:

    1. topolvm_thinpool_data_percent
    2. topolvm_thinpool_metadata_percent
    3. topolvm_thinpool_size_bytes

Metrics are updated every 10 minutes or when there is a change in the thin pool, such as a new logical volume creation.
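
A minimal sketch of adding these metrics to the allow list, assuming the observability-metrics-custom-allowlist ConfigMap in the open-cluster-management-observability namespace that RHACM Observability reads custom metrics from:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: observability-metrics-custom-allowlist
      namespace: open-cluster-management-observability
    data:
      metrics_list.yaml: |
        names:
          - topolvm_thinpool_data_percent
          - topolvm_thinpool_metadata_percent
          - topolvm_thinpool_size_bytes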

Alerts

When the thin pool and volume group are filled up, further operations fail and might lead to data loss. LVM Storage sends the following alerts about the usage of the thin pool and volume group when utilization crosses a certain value:

Alerts for Logical Volume Manager cluster in RHACM

AlertDescription

VolumeGroupUsageAtThresholdNearFull

This alert is triggered when both the volume group and thin pool utilization cross 75% on nodes. Data deletion or volume group expansion is required.

VolumeGroupUsageAtThresholdCritical

This alert is triggered when both the volume group and thin pool utilization cross 85% on nodes. VolumeGroup is critically full. Data deletion or volume group expansion is required.

ThinPoolDataUsageAtThresholdNearFull

This alert is triggered when the thin pool data utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolDataUsageAtThresholdCritical

This alert is triggered when the thin pool data utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdNearFull

This alert is triggered when the thin pool metadata utilization in the volume group crosses 75% on nodes. Data deletion or thin pool expansion is required.

ThinPoolMetaDataUsageAtThresholdCritical

This alert is triggered when the thin pool metadata utilization in the volume group crosses 85% on nodes. Data deletion or thin pool expansion is required.
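
If you need to inspect the underlying utilization directly on a node, for example from a debug shell, a sketch such as the following shows the thin pool data and metadata usage that these alerts are based on. The volume group name vg1 is the example name used elsewhere in this document:

    # lvs -o lv_name,vg_name,data_percent,metadata_percent vg1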

Additional resources

Scaling storage of single-node OpenShift clusters

OKD supports additional worker nodes for single-node OpenShift clusters on bare-metal user-provisioned infrastructure. LVM Storage detects and uses the new worker nodes when they become available.

Additional resources

Scaling up storage by adding capacity to your single-node OpenShift cluster

To scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster, you can increase the capacity by adding disks.

Prerequisites

  • You have additional unused disks on each single-node OpenShift cluster to be used by LVM Storage.

Procedure

  1. Log in to the OKD console of the single-node OpenShift cluster.

  2. From the Operators → Installed Operators page, click the LVM Storage Operator in the openshift-storage namespace.

  3. Click on the LVMCluster tab to list the LVMCluster CR created on the cluster.

  4. Select Edit LVMCluster from the Actions drop-down menu.

  5. Click on the YAML tab.

  6. Edit the LVMCluster CR YAML to add the new device path in the deviceSelector section:

    If the deviceSelector field was not included when the LVMCluster CR was created, you cannot add the deviceSelector section to the CR. You must remove the LVMCluster CR and then create a new CR that includes the deviceSelector field.

    1. apiVersion: lvm.topolvm.io/v1alpha1
    2. kind: LVMCluster
    3. metadata:
    4. name: my-lvmcluster
    5. spec:
    6. storage:
    7. deviceClasses:
    8. - name: vg1
    9. default: true
    10. deviceSelector: (1)
    11. paths:
    12. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
    13. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
    14. optionalPaths:
    15. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    16. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    17. thinPoolConfig:
    18. name: thin-pool-1
    19. sizePercent: 90
    20. overprovisionRatio: 10
    1Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the deviceSelector section of the LVMCluster YAML. The paths section refers to devices the LVMCluster adds, which means those paths must exist. The optionalPaths section refers to devices the LVMCluster might add. You must specify at least one of paths or optionalPaths when specifying the deviceSelector section. If you specify paths, it is not mandatory to specify optionalPaths. If you specify optionalPaths, it is not mandatory to specify paths but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node.

Additional resources

Scaling up storage by adding capacity to your single-node OpenShift cluster using RHACM

You can scale the storage capacity of your configured worker nodes on a single-node OpenShift cluster using RHACM.

Prerequisites

  • You have access to the RHACM cluster using an account with cluster-admin privileges.

  • You have additional unused devices on each single-node OpenShift cluster that LVM Storage can use.

Procedure

  1. Log in to the RHACM CLI using your OKD credentials.

  2. Find the device that you want to add. The device that you add must match the device name and path of the existing devices.

  3. To add capacity to the single-node OpenShift cluster, edit the deviceSelector section of the existing policy YAML, for example, policy-lvms-operator.yaml.

    If the deviceSelector field was not included when the LVMCluster CR was created, you cannot add the deviceSelector section to the CR. You must remove the LVMCluster CR and then recreate it from a new CR that includes the deviceSelector field.

    1. apiVersion: apps.open-cluster-management.io/v1
    2. kind: PlacementRule
    3. metadata:
    4. name: placement-install-lvms
    5. spec:
    6. clusterConditions:
    7. - status: "True"
    8. type: ManagedClusterConditionAvailable
    9. clusterSelector:
    10. matchExpressions:
    11. - key: mykey
    12. operator: In
    13. values:
    14. - myvalue
    15. ---
    16. apiVersion: policy.open-cluster-management.io/v1
    17. kind: PlacementBinding
    18. metadata:
    19. name: binding-install-lvms
    20. placementRef:
    21. apiGroup: apps.open-cluster-management.io
    22. kind: PlacementRule
    23. name: placement-install-lvms
    24. subjects:
    25. - apiGroup: policy.open-cluster-management.io
    26. kind: Policy
    27. name: install-lvms
    28. ---
    29. apiVersion: policy.open-cluster-management.io/v1
    30. kind: Policy
    31. metadata:
    32. annotations:
    33. policy.open-cluster-management.io/categories: CM Configuration Management
    34. policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    35. policy.open-cluster-management.io/standards: NIST SP 800-53
    36. name: install-lvms
    37. spec:
    38. disabled: false
    39. remediationAction: enforce
    40. policy-templates:
    41. - objectDefinition:
    42. apiVersion: policy.open-cluster-management.io/v1
    43. kind: ConfigurationPolicy
    44. metadata:
    45. name: install-lvms
    46. spec:
    47. object-templates:
    48. - complianceType: musthave
    49. objectDefinition:
    50. apiVersion: v1
    51. kind: Namespace
    52. metadata:
    53. labels:
    54. openshift.io/cluster-monitoring: "true"
    55. pod-security.kubernetes.io/enforce: privileged
    56. pod-security.kubernetes.io/audit: privileged
    57. pod-security.kubernetes.io/warn: privileged
    58. name: openshift-storage
    59. - complianceType: musthave
    60. objectDefinition:
    61. apiVersion: operators.coreos.com/v1
    62. kind: OperatorGroup
    63. metadata:
    64. name: openshift-storage-operatorgroup
    65. namespace: openshift-storage
    66. spec:
    67. targetNamespaces:
    68. - openshift-storage
    69. - complianceType: musthave
    70. objectDefinition:
    71. apiVersion: operators.coreos.com/v1alpha1
    72. kind: Subscription
    73. metadata:
    74. name: lvms
    75. namespace: openshift-storage
    76. spec:
    77. installPlanApproval: Automatic
    78. name: lvms-operator
    79. source: redhat-operators
    80. sourceNamespace: openshift-marketplace
    81. remediationAction: enforce
    82. severity: low
    83. - objectDefinition:
    84. apiVersion: policy.open-cluster-management.io/v1
    85. kind: ConfigurationPolicy
    86. metadata:
    87. name: lvms
    88. spec:
    89. object-templates:
    90. - complianceType: musthave
    91. objectDefinition:
    92. apiVersion: lvm.topolvm.io/v1alpha1
    93. kind: LVMCluster
    94. metadata:
    95. name: my-lvmcluster
    96. namespace: openshift-storage
    97. spec:
    98. storage:
    99. deviceClasses:
    100. - name: vg1
    101. default: true
    102. deviceSelector: (1)
    103. paths:
    104. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
    105. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
    106. optionalPaths:
    107. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
    108. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
    109. thinPoolConfig:
    110. name: thin-pool-1
    111. sizePercent: 90
    112. overprovisionRatio: 10
    113. nodeSelector:
    114. nodeSelectorTerms:
    115. - matchExpressions:
    116. - key: app
    117. operator: In
    118. values:
    119. - test1
    120. remediationAction: enforce
    121. severity: low
    1Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the deviceSelector section of the LVMCluster YAML. The paths section refers to devices the LVMCluster adds, which means those paths must exist. The optionalPaths section refers to devices the LVMCluster might add. You must specify at least one of paths or optionalPaths when specifying the deviceSelector section. If you specify paths, it is not mandatory to specify optionalPaths. If you specify optionalPaths, it is not mandatory to specify paths but at least one optional path must be present on the node. If you do not specify any paths, it will add all unused devices on the node.
  4. Edit the policy by running the following command:

    1. # oc edit -f policy-lvms-operator.yaml -n lvms-policy-ns (1)
    1The policy-lvms-operator.yaml is the name of the existing policy.

    This uses the new disk specified in the LVMCluster CR to provision storage.
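
    To confirm that the new device was added to the volume group, you can reuse the device class status check from the Creating a Logical Volume Manager cluster procedure; run it against the managed single-node OpenShift cluster:

    $ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}'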

Additional resources

Expanding PVCs

To leverage the new storage after adding additional capacity, you can expand existing persistent volume claims (PVCs) with LVM Storage.

Prerequisites

  • Dynamic provisioning is used.

  • The controlling StorageClass object has allowVolumeExpansion set to true.

Procedure

  1. Modify the .spec.resources.requests.storage field in the desired PVC resource to the new size by running the following command:

    1. oc patch pvc <pvc_name> -n <application_namespace> -p '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
  2. Watch the status.conditions field of the PVC to see if the resize has completed. OKD adds the Resizing condition to the PVC during expansion, which is removed after the expansion completes.
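
    As a minimal sketch of this check, the following command prints the current conditions of the PVC; replace the placeholders with your PVC name and namespace:

    $ oc get pvc <pvc_name> -n <application_namespace> -o jsonpath='{.status.conditions}'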

Additional resources

Upgrading LVM Storage on single-node OpenShift clusters

Currently, it is not possible to upgrade from OpenShift Data Foundation Logical Volume Manager Operator 4.11 to LVM Storage 4.12 on single-node OpenShift clusters.

The data will not be preserved during this process.

Procedure

  1. Back up any data that you want to preserve on the persistent volume claims (PVCs).

  2. Delete all PVCs provisioned by the OpenShift Data Foundation Logical Volume Manager Operator and their pods.

  3. Reinstall LVM Storage on OKD 4.12.

  4. Recreate the workloads.

  5. Copy the backup data to the PVCs created after upgrading to 4.12.

Volume snapshots for single-node OpenShift

You can take volume snapshots of persistent volumes (PVs) that are provisioned by LVM Storage. You can also create volume snapshots of the cloned volumes. Volume snapshots help you to do the following:

  • Back up your application data.

    Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you need to move the snapshots to a secure location. You can use OpenShift API for Data Protection backup and restore solutions.

  • Revert to a state at which the volume snapshot was taken.

Additional resources

Creating volume snapshots in single-node OpenShift

You can create volume snapshots based on the available capacity of the thin pool and the overprovisioning limits. LVM Storage creates a VolumeSnapshotClass with the lvms-<deviceclass-name> name.

Prerequisites

  • You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot.

  • You stopped all the I/O to the PVC before taking the snapshot.

Procedure

  1. Log in to the single-node OpenShift for which you need to run the oc command.

  2. Save the following YAML to a file with a name such as lvms-vol-snapshot.yaml.

    Example YAML to create a volume snapshot

    1. apiVersion: snapshot.storage.k8s.io/v1
    2. kind: VolumeSnapshot
    3. metadata:
    4. name: lvm-block-1-snap
    5. spec:
    6. volumeSnapshotClassName: lvms-vg1
    7. source:
    8. persistentVolumeClaimName: lvm-block-1
  3. Create the snapshot by running the following command in the same namespace as the PVC:

    1. # oc create -f lvms-vol-snapshot.yaml

A read-only copy of the PVC is created as a volume snapshot.
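
Optionally, you can check that the snapshot is ready to use. This assumes the lvm-block-1-snap name from the example above; the READYTOUSE column reports true when the snapshot is complete:

    # oc get volumesnapshot lvm-block-1-snap -n <namespace>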

Restoring volume snapshots in single-node OpenShift

When you restore a volume snapshot, a new persistent volume claim (PVC) is created. The restored PVC is independent of the volume snapshot and the source PVC.

Prerequisites

  • The storage class must be the same as that of the source PVC.

  • The size of the requested PVC must be the same as that of the source volume of the snapshot.

    A snapshot must be restored to a PVC of the same size as the source volume of the snapshot. If a larger PVC is required, you can resize the PVC after the snapshot is restored successfully.

Procedure

  1. Identify the storage class name of the source PVC and volume snapshot name.

  2. Save the following YAML to a file with a name such as lvms-vol-restore.yaml to restore the snapshot.

    Example YAML to restore a PVC.

    1. kind: PersistentVolumeClaim
    2. apiVersion: v1
    3. metadata:
    4. name: lvm-block-1-restore
    5. spec:
    6. accessModes:
    7. - ReadWriteOnce
    8. volumeMode: Block
    9. resources:
    10. requests:
    11. storage: 2Gi
    12. storageClassName: lvms-vg1
    13. dataSource:
    14. name: lvm-block-1-snap
    15. kind: VolumeSnapshot
    16. apiGroup: snapshot.storage.k8s.io
  3. Create the PVC by running the following command in the same namespace as the snapshot:

    1. # oc create -f lvms-vol-restore.yaml

Deleting volume snapshots in single-node OpenShift

You can delete volume snapshot resources and persistent volume claims (PVCs).

Procedure

  1. Delete the volume snapshot resource by running the following command:

    1. # oc delete volumesnapshot <volume_snapshot_name> -n <namespace>

    When you delete a persistent volume claim (PVC), the snapshots of the PVC are not deleted.

  2. To delete the restored volume snapshot, delete the PVC that was created to restore the volume snapshot by running the following command:

    1. # oc delete pvc <pvc_name> -n <namespace>

Volume cloning for single-node OpenShift

A clone is a duplicate of an existing storage volume that can be used like any standard volume.

Creating volume clones in single-node OpenShift

You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size.

The cloned PVC has write access.

Prerequisites

  • You ensured that the PVC is in Bound state. This is required for a consistent snapshot.

  • You ensured that the StorageClass is the same as that of the source PVC.

Procedure

  1. Identify the storage class of the source PVC.

  2. To create a volume clone, save the following YAML to a file with a name such as lvms-vol-clone.yaml:

    Example YAML to clone a volume

    1. apiVersion: v1
    2. kind: PersistentVolumeClaim
    3. metadata:
    4. name: lvm-block-1-clone
    5. spec:
    6. storageClassName: lvms-vg1
    7. dataSource:
    8. name: lvm-block-1
    9. kind: PersistentVolumeClaim
    10. accessModes:
    11. - ReadWriteOnce
    12. volumeMode: Block
    13. resources:
    14. requests:
    15. storage: 2Gi
  3. Create the cloned PVC in the same namespace as the source PVC by running the following command:

    1. # oc create -f lvms-vol-clone.yaml

Deleting cloned volumes in single-node OpenShift

You can delete cloned volumes.

Procedure

  • To delete the cloned volume, delete the cloned PVC by running the following command:

    1. # oc delete pvc <clone_pvc_name> -n <namespace>

Downloading log files and diagnostic information using must-gather

When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or the Red Hat Support can review the problem and determine a solution.

  • Run the must-gather command from a client that is connected to the LVM Storage cluster:

    1. $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=<directory-name>

Additional resources

LVM Storage reference YAML file

The sample LVMCluster custom resource (CR) describes all the fields in the YAML file.

Example LVMCluster CR

  1. apiVersion: lvm.topolvm.io/v1alpha1
  2. kind: LVMCluster
  3. metadata:
  4. name: my-lvmcluster
  5. spec:
  6. tolerations:
  7. - effect: NoSchedule
  8. key: xyz
  9. operator: Equal
  10. value: "true"
  11. storage:
  12. deviceClasses: (1)
  13. - name: vg1 (2)
  14. default: true
  15. nodeSelector: (3)
  16. nodeSelectorTerms: (4)
  17. - matchExpressions:
  18. - key: mykey
  19. operator: In
  20. values:
  21. - ssd
  22. deviceSelector: (5)
  23. paths:
  24. - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
  25. - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
  26. optionalPaths:
  27. - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
  28. - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
  29. thinPoolConfig: (6)
  30. name: thin-pool-1 (7)
  31. sizePercent: 90 (8)
  32. overprovisionRatio: 10 (9)
  33. status:
  34. deviceClassStatuses: (10)
  35. - name: vg1
  36. nodeStatus: (11)
  37. - devices: (12)
  38. - /dev/nvme0n1
  39. - /dev/nvme1n1
  40. - /dev/nvme2n1
  41. node: my-node.example.com (13)
  42. status: Ready (14)
  43. ready: true (15)
  44. state: Ready (16)
1The LVM volume groups to be created on the cluster. Currently, only a single deviceClass is supported.
2The name of the LVM volume group to be created on the nodes.
3The nodes on which to create the LVM volume group. If the field is empty, all nodes are considered.
4A list of node selector requirements.
5A list of device paths which is used to create the LVM volume group. If this field is empty, all unused disks on the node will be used.
6The LVM thin pool configuration.
7The name of the thin pool to be created in the LVM volume group.
8The percentage of remaining space in the LVM volume group that should be used for creating the thin pool.
9The factor by which additional storage can be provisioned compared to the available storage in the thin pool.
10The status of the deviceClass.
11The status of the LVM volume group on each node.
12The list of devices used to create the LVM volume group.
13The node on which the deviceClass was created.
14The status of the LVM volume group on the node.
15This field is deprecated.
16The status of the LVMCluster.