OpenEBS Local PV User Guide

[Figure: OpenEBS configuration flow]

A local PV represents a mounted local storage device such as a disk or a hostpath (or subpath) directory. Local PVs are an extension to hostpath volumes, but are more secure.

The OpenEBS Dynamic Local PV provisioner helps provision Local PVs dynamically by integrating with the features offered by the OpenEBS Node Storage Device Manager, and offers the flexibility to select either a complete storage device or a hostpath (or subpath) directory.

Prerequisites

  • Kubernetes 1.12 or higher is required to use OpenEBS Local PV.
  • An unclaimed block device on the worker node where the application is going to be scheduled, for provisioning OpenEBS Local PV based on device (a quick verification command is sketched just below this list).
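
The block devices detected on each node are managed by NDM as BlockDevice resources. A quick way to confirm that an unclaimed device is available (a hedged check, assuming OpenEBS is installed in the openebs namespace) is:

    # List the BlockDevice resources created by NDM; devices available for
    # Local PV provisioning should show CLAIMSTATE as Unclaimed.
    kubectl get blockdevice -n openebs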

User Operations

Provision OpenEBS Local PV Based on hostpath

Provision OpenEBS Local PV Based on Device

Backup and Restore

Admin Operations

General Verification of Block Device Mount Status for Local PV Based on Device

Configure hostpath

User Operations

Provision OpenEBS Local PV based on hostpath

The simplest way to provision an OpenEBS Local PV based on hostpath is to use the default StorageClass, which is created as part of the latest operator YAML. The default StorageClass name for the hostpath configuration is openebs-hostpath. The default hostpath is configured as /var/openebs/local.
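
To confirm that the default StorageClasses were created by the operator, the following check can be used (a minimal sketch; the exact set of StorageClasses depends on your OpenEBS release):

    # Verify the default Local PV StorageClasses installed by the operator YAML.
    kubectl get sc openebs-hostpath openebs-device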

The following is a sample deployment configuration of a Percona application that consumes an OpenEBS Local PV. To use OpenEBS Local PV based on hostpath, set the StorageClass name to openebs-hostpath in the PVC spec of the Percona deployment.

    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: percona
      labels:
        name: percona
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: percona
      template:
        metadata:
          labels:
            name: percona
        spec:
          securityContext:
            fsGroup: 999
          tolerations:
          - key: "ak"
            value: "av"
            operator: "Equal"
            effect: "NoSchedule"
          containers:
          - resources:
              limits:
                cpu: 0.5
            name: percona
            image: percona
            args:
            - "--ignore-db-dir"
            - "lost+found"
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: k8sDem0
            ports:
            - containerPort: 3306
              name: percona
            volumeMounts:
            - mountPath: /var/lib/mysql
              name: demo-vol1
          volumes:
          - name: demo-vol1
            persistentVolumeClaim:
              claimName: demo-vol1-claim
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-vol1-claim
    spec:
      storageClassName: openebs-hostpath
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: percona-mysql
      labels:
        name: percona-mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
      selector:
        name: percona

Deploy the application using the following command. In this example, the above configuration YAML spec is saved as demo-percona-mysql-pvc.yaml.

    kubectl apply -f demo-percona-mysql-pvc.yaml

The Percona application will now be running on the OpenEBS Local PV on hostpath. Verify that the application is running using the following command.

    kubectl get pod -n <namespace>

This documentation uses the default namespace. Since the default namespace may be omitted from commands, the command becomes:

    kubectl get pod

The output will be similar to the following.

    NAME                       READY   STATUS    RESTARTS   AGE
    percona-7b64956695-hs7tv   1/1     Running   0          21s

Verify PVC status using the following command.

    kubectl get pvc -n <namespace>

The output will be similar to the following.

    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
    demo-vol1-claim   Bound    pvc-2e4b123e-88ff-11e9-bc28-42010a8001ff   5G         RWO            openebs-hostpath   28s

Verify PV status using the following command.

    kubectl get pv

The output will be similar to the following.

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS       REASON   AGE
    pvc-2e4b123e-88ff-11e9-bc28-42010a8001ff   5G         RWO            Delete           Bound    default/demo-vol1-claim   openebs-hostpath            22s
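
Optionally, the PV can be inspected to see its node affinity and the hostpath directory carved out under /var/openebs/local. A hedged example using the PV name from the output above:

    # Show node affinity and the backing hostpath directory of the Local PV.
    kubectl describe pv pvc-2e4b123e-88ff-11e9-bc28-42010a8001ff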

Provision OpenEBS Local PV Based on Device

The simplest way to provision an OpenEBS Local PV based on device is to use the default StorageClass for OpenEBS Local PV based on device, which is created as part of the latest operator YAML. The default StorageClass name for the device-based configuration is openebs-device.

The following is a sample deployment configuration of a Percona application that consumes an OpenEBS Local PV based on device. To use the default OpenEBS Local PV based on device, set the StorageClass name to openebs-device in the PVC spec of the Percona deployment.

    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: percona
      labels:
        name: percona
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: percona
      template:
        metadata:
          labels:
            name: percona
        spec:
          securityContext:
            fsGroup: 999
          tolerations:
          - key: "ak"
            value: "av"
            operator: "Equal"
            effect: "NoSchedule"
          containers:
          - resources:
              limits:
                cpu: 0.5
            name: percona
            image: percona
            args:
            - "--ignore-db-dir"
            - "lost+found"
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: k8sDem0
            ports:
            - containerPort: 3306
              name: percona
            volumeMounts:
            - mountPath: /var/lib/mysql
              name: demo-vol1
          volumes:
          - name: demo-vol1
            persistentVolumeClaim:
              claimName: demo-vol1-claim
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-vol1-claim
    spec:
      storageClassName: openebs-device
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: percona-mysql
      labels:
        name: percona-mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
      selector:
        name: percona

In this example, the above configuration YAML spec is saved as demo-percona-mysql-pvc.yaml.

Note:

  • The Local PV volume will be provisioned with volumeMode as Filesystem by default. The supported filesystems are ext4 and xfs, which means the Local PV volume will be created and formatted with one of these filesystems. If no filesystem is specified, the kubelet will format the BlockDevice as ext4 by default. More details can be found here. (A custom StorageClass sketch for choosing the filesystem explicitly follows the Raw Block Volume example below.)

  • With OpenEBS version 1.5 and later, Local PV volumes support Raw Block Volumes. Raw Block Volume support is enabled by setting spec.volumeMode to Block in the PVC spec. Below is a sample PVC spec for provisioning a Local PV Raw Block Volume.

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: my-pvc
      spec:
        accessModes:
        - ReadWriteOnce
        volumeMode: Block
        storageClassName: openebs-device
        resources:
          requests:
            storage: 10Gi
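
  • For device-based volumes, the filesystem can also be chosen explicitly in a custom StorageClass. The following is a minimal sketch (not part of the default operator YAML) that requests xfs through the FSType config option; the StorageClass name local-device-xfs is illustrative, and the option names should be verified against your OpenEBS release.

      # Hedged sketch: a custom device StorageClass that formats Local PV
      # volumes with xfs instead of the ext4 default.
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: local-device-xfs
        annotations:
          openebs.io/cas-type: local
          cas.openebs.io/config: |
            - name: StorageType
              value: "device"
            - name: FSType
              value: "xfs"
      provisioner: openebs.io/local
      volumeBindingMode: WaitForFirstConsumer
      reclaimPolicy: Delete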

Run the following command to provision the application using the above saved YAML spec.

    kubectl apply -f demo-percona-mysql-pvc.yaml

The Percona application now runs using the OpenEBS Local PV volume on device. Verify that the application is running using the following command.

    kubectl get pod

The output will be similar to the following.

    NAME                       READY   STATUS    RESTARTS   AGE
    percona-7b64956695-lnzq4   1/1     Running   0          46s

Verify the PVC status using the following command.

    kubectl get pvc

The output will be similar to the following.

    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
    demo-vol1-claim   Bound    pvc-d0ea3a06-88fe-11e9-bc28-42010a8001ff   5G         RWO            openebs-device   38s

Verify the PV status using the following command.

    kubectl get pv

The output will be similar to the following.

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS     REASON   AGE
    pvc-d0ea3a06-88fe-11e9-bc28-42010a8001ff   5G         RWO            Delete           Bound    default/demo-vol1-claim   openebs-device            35s
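
Behind the scenes, the provisioner claims one of the unclaimed block devices through a BlockDeviceClaim. A hedged way to see which device was claimed (assuming OpenEBS is installed in the openebs namespace):

    # List the BlockDeviceClaims created by the Local PV provisioner and the
    # claim state of the underlying BlockDevices.
    kubectl get blockdeviceclaim -n openebs
    kubectl get blockdevice -n openebs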

Backup and Restore

OpenEBS volumes can be backed up and restored along with the application using the Velero plugin. It helps the user back up OpenEBS volumes to a third-party storage location and restore the data whenever needed. The steps for taking a backup and restoring are as follows.

Prerequisites

  • The mount propagation feature has to be enabled on Kubernetes; otherwise, the data written from the pods will not be visible in the restic daemonset pod on the same node. It is enabled by default from Kubernetes version 1.12. More details can be found here.
  • The latest tested Velero version is 1.1.0.
  • Create the required storage provider configuration to store the backup data.
  • Create the required StorageClass on the destination cluster.
  • Annotate the application pods that contain a volume to back up.
  • Add a common label to all the resources associated with the application that you want to back up. For example, add an application label selector to associated components such as the PVC, SVC, and so on.

Overview

Velero is a utility to back up and restore your Kubernetes resources and persistent volumes.

To back up and restore OpenEBS Local PV, configure Velero with restic and use the velero backup command to take a backup of the application with OpenEBS Local PV. This invokes restic internally, copies the data from the given application, including the entire data from the associated persistent volumes, and backs it up to the configured storage location such as S3 or MinIO.

The following is the step-by-step procedure for backing up and restoring an application with OpenEBS Local PV.

  1. Install Velero
  2. Annotate Application Pod
  3. Creating and Managing Backups
  4. Steps for Restore

Install Velero (Formerly known as ARK)

Follow the instructions in the Velero documentation to install and configure Velero, and follow the restic integration documentation to set up and use restic support.

While installing Velero in your cluster, specify --use-restic to enable restic support.
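
The exact install command depends on your object-store provider. The following is a hedged sketch for Velero 1.1 with an S3-compatible store such as MinIO; the bucket name, credentials file, and s3Url are placeholders to be replaced with your own values.

    # Hedged sketch: install Velero 1.1 with restic support against an
    # S3-compatible object store (bucket, secret file and s3Url are placeholders).
    velero install \
      --provider aws \
      --bucket velero-backups \
      --secret-file ./credentials-velero \
      --use-restic \
      --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000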

After installing Velero with restic support, verify that the restic pod and the Velero pod are running using the following command.

    kubectl get pod -n velero

The following is an example output in a single node cluster.

    NAME                      READY   STATUS    RESTARTS   AGE
    restic-ksfqr              1/1     Running   0          21s
    velero-84b9b44d88-gn8dk   1/1     Running   0          25m

Annotate Application Pod

Run the following to annotate each application pod that contains a volume to back up.

    kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...

In the above command, the volume names are the names of the volumes specified in the application pod spec.

Example Spec:

If the application spec contains the volume name as shown below, then use the volume name storage in the annotation command.

    spec:
      # Refer to the PVC created earlier
      volumes:
      - name: storage
        persistentVolumeClaim:
          # Name of the PVC created earlier
          claimName: minio-pv-claim
      containers:
      - name: minio

If the application pod name is minio-deployment-7fc6cdfcdc-p6hlq, use the following command to annotate the application.

    kubectl -n default annotate pod/minio-deployment-7fc6cdfcdc-p6hlq backup.velero.io/backup-volumes=storage

Creating and Managing Backups

Take the backup using the following command.

    velero backup create <backup_name> -l <app-label-selector>

Example:

    velero backup create hostpathbkp1 -l app=minio

The above example command will take a backup of all resources that have the common label app=minio.

Note: You can use the --selector flag in the backup command to filter specific resources, or use a combination of --include-namespaces and --exclude-resources to exclude specific resources in the specified namespace. More details can be found here.
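
As a hedged illustration of these filtering flags, the following would back up everything in the default namespace except Secrets; the backup name hostpathbkp2 and the resource choices are only examples.

    # Example only: namespace-scoped backup that skips Secret resources.
    velero backup create hostpathbkp2 --include-namespaces default --exclude-resources secrets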

After taking the backup, verify whether the backup was taken successfully using the following command.

    velero backup get

The following is a sample output.

    NAME           STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
    hostpathbkp1   Completed   2019-06-14 14:57:01 +0530 IST   29d       default            app=minio

You can get more details about the backup using the following command.

    velero backup describe hostpathbkp1

Once the backup is completed, you should see the Phase marked as Completed in the output of the above command.
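
If the backup ends up in a PartiallyFailed or Failed state, the backup logs can be inspected, for example:

    # Fetch the logs for the named backup from the configured storage location.
    velero backup logs hostpathbkp1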

Steps for Restore

A Velero backup can be restored onto a new cluster or onto the same cluster. An OpenEBS PV with the same name as the original PV will be created, and the application will run using the restored OpenEBS volume.

Prerequisites

  • Ensure that the same namespace, StorageClass configuration, and PVC configuration as the source are created in your destination cluster.
  • Ensure that at least one unclaimed block device is present on the destination cluster to restore an OpenEBS Local PV provisioned on a device.

On the target cluster, restore the application using the following command.

    velero restore create --from-backup <backup-name>

Example:

    velero restore create --from-backup hostpathbkp1

The following command can be used to obtain the restore job status.

    velero restore get

The following is an example output. Once the restore is completed you should see the status marked as Completed.

    NAME                          BACKUP         STATUS      WARNINGS   ERRORS   CREATED                         SELECTOR
    hostpathbkp1-20190614151818   hostpathbkp1   Completed   34         0        2019-06-14 15:18:19 +0530 IST   <none>

Verify the application status using the following command.

    kubectl get pod -n <namespace>

Verify PVC status using the following command.

    kubectl get pvc -n <namespace>

Admin Operations

General Verification of Block Device Mount Status for Local PV Based on Device

Applications can be provisioned using OpenEBS Local PV based on device. For provisioning OpenEBS Local PV using the block devices attached to the nodes, the block devices should be in one of the following states (a quick way to check the format and mount status on a node is sketched after the list).

  • User has attached the block device, formatted it, and mounted it.
    • For example: Local SSD in GKE.
  • User has attached the block device, and it is unformatted and not mounted.
    • For example: GPD in GKE.
  • User has attached the block device, but the device has only a device path and no dev links.
    • For example: a VM with VMDK disks or an AWS node with EBS.
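
A quick way to check the format and mount status of the attached block devices is to run lsblk directly on the worker node; this is a general Linux check, not an OpenEBS-specific command.

    # Run on the worker node: lists each block device with its filesystem
    # type (FSTYPE) and mount point, if any.
    lsblk -f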

Configure hostpath

The default hostpath is configured as /var/openebs/local. This can be changed either during the OpenEBS operator install, by passing the OPENEBS_IO_BASE_PATH ENV parameter to the OpenEBS Local PV dynamic provisioner deployment spec, or via the StorageClass. Examples of both approaches are shown below.

Using OpenEBS operator YAML

The following example changes the ENV variable in the Local PV dynamic provisioner deployment spec in the operator YAML. This has to be done before applying the OpenEBS operator YAML file.

    - name: OPENEBS_IO_BASE_PATH
      value: "/mnt/"
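
After the operator YAML is applied, the base path picked up by the provisioner can be confirmed with a hedged check like the following, assuming the default deployment name openebs-localpv-provisioner in the openebs namespace:

    # Print the OPENEBS_IO_BASE_PATH env entry from the provisioner deployment.
    kubectl -n openebs get deployment openebs-localpv-provisioner -o yaml | grep -A1 OPENEBS_IO_BASE_PATH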

Using StorageClass

The following example changes the BasePath via the StorageClass.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-hostpath
      annotations:
        openebs.io/cas-type: local
        cas.openebs.io/config: |
          - name: BasePath
            value: "/mnt/"
    provisioner: openebs.io/local
    volumeBindingMode: WaitForFirstConsumer
    reclaimPolicy: Delete

Apply the above StorageClass configuration after making the necessary changes, and use this StorageClass name in the corresponding PVC specification to provision the application on an OpenEBS Local PV based on the customized hostpath.

Verify that the StorageClass has the updated hostpath using the following command, and check that the value is set properly for the BasePath config.

    kubectl describe sc openebs-hostpath

Note: If you are using the mount path of an external device as the BasePath for the default hostpath StorageClass, then you must add the corresponding block device path under the exclude filter so that NDM will not select that disk for BD creation. For example, if /dev/sdb is mounted as /mnt/ext_device and you are using /mnt/ext_device as the BasePath in the default StorageClass openebs-hostpath, then you must add /dev/sdb under the exclude filter of the NDM configuration. See here for customizing the exclude filter in the NDM configuration. (A hedged sketch of the relevant ConfigMap section follows.)
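
The following is a hedged sketch of the relevant section of the NDM ConfigMap (commonly named openebs-ndm-config), with /dev/sdb appended to the path filter's exclude list as in the example above; verify the exact structure against the operator YAML of your release.

    # Hedged sketch: excerpt of the NDM configuration (filterconfigs section)
    # with /dev/sdb added to the path-filter exclude list.
    filterconfigs:
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/sdb"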

See Also:

Understand OpenEBS Local PVs

Node Disk Manager