OpenEBS LocalPV

Overview

OpenEBS Local PV is a CAS engine that can create persistent volumes using either local disks or host paths on the worker nodes. With this CAS engine, performance is equivalent to that of the local disk or the filesystem (host path) on which the volumes are created. Many cloud native applications do not require advanced storage features like replication, snapshots, and clones, as they handle those themselves. Such applications only need access to managed disks as persistent volumes.

Benefits of OpenEBS Local PVs

OpenEBS Local PVs are analogous to Kubernetes Local Persistent Volumes. In addition, OpenEBS Local PVs have the following benefits.

  • Local PVs are provisioned dynamically by the OpenEBS Local PV provisioner. When a Local PV is provisioned with the default StorageClass for the storage type hostpath, a default host path is created dynamically and mapped to the Local PV. When a Local PV is provisioned with the default StorageClass for the storage type device, one of the matching disks on the node is reserved and mapped to the Local PV.
  • Disks for Local PVs are managed by OpenEBS. Disk IO metrics of managed disks can be obtained with the help of NDM.
  • Provisioning of Local PVs follows Kubernetes standards. Admin users create StorageClasses to enforce the storage type (device or hostpath) and apply additional control through RBAC policies.
  • By specifying a node selector in the application spec YAML, the application pods can be scheduled on a specific node. After the application pod is scheduled, the OpenEBS Local PV is provisioned on the same node. This guarantees that the pod is rescheduled on the same node, so it retains access to its data at all times.
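As a sketch of the node-selector approach, the Deployment below pins its pod to one node via kubernetes.io/hostname; the application name, node name, and PVC name are all illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fio                      # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fio
  template:
    metadata:
      labels:
        app: fio
    spec:
      nodeSelector:
        kubernetes.io/hostname: worker-node-1   # illustrative node name
      containers:
      - name: fio
        image: openebs/tests-fio
        volumeMounts:
        - name: local-storage
          mountPath: /datadir
      volumes:
      - name: local-storage
        persistentVolumeClaim:
          claimName: demo-vol1-claim            # hypothetical PVC name
```

Because the volume is bound to that node, the pod must always be scheduled there; the nodeSelector makes that placement explicit.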

How to use OpenEBS Local PVs

OpenEBS creates two StorageClasses for Local PVs by default: openebs-hostpath and openebs-device. For simple provisioning of OpenEBS Local PVs, use these default StorageClasses. More details can be found here.

End users or developers provision OpenEBS Local PVs like any other PV, by creating a PVC using a StorageClass provided by the admin user.
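For example, a minimal PVC against the default openebs-hostpath StorageClass might look like the following (the PVC name and requested size are illustrative):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc     # hypothetical name
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
```

Applying this PVC and referencing it from a pod triggers the Local PV provisioner to create and bind the volume.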

OpenEBS Local PV based on device

The admin user creates a StorageClass for device-based Local PVs (StorageType device) using the following configuration:

```
openebs.io/cas-type: local
cas.openebs.io/config: |
  - name: StorageType
    value: "device"
  - name: FSType
    value: ext4
provisioner: openebs.io/local
```

When a PVC is created using the above StorageClass, the OpenEBS Local PV provisioner uses the NDM operator to reserve a matching disk on the worker node where the application pod is scheduled. If no FSType is specified, the Local PV provisioner formats the block device with the default filesystem, ext4.
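Assuming the StorageClass above is named local-device (an illustrative name), a PVC against it might look like:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-device-pvc       # hypothetical name
spec:
  storageClassName: local-device   # assumed StorageClass name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10G
```

On binding, NDM reserves one block device on the scheduled node that can satisfy the requested capacity.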

For provisioning a Local PV using the block devices attached to the nodes, the block devices should be in one of the following states:

  • The user has attached the block device, formatted it, and mounted it. This means the disk is already formatted and mounted on the worker node.

    • For example: local SSDs in GKE.
  • The user has attached the block device, but it is unformatted and not mounted. This means the disk is attached to the worker node without any filesystem.

    • For example: GPD in GKE.
  • The user has attached the block device, but the device has only a device path and no devlinks.

    • For example: a VM with VMDK disks, or an AWS node with EBS.

OpenEBS Local PV based on hostpath

The admin user creates a StorageClass for hostpath-based Local PVs (StorageType hostpath) using the following configuration:

```
openebs.io/cas-type: local
cas.openebs.io/config: |
  - name: BasePath
    value: "/var/openebs/local"
  - name: StorageType
    value: "hostpath"
provisioner: openebs.io/local
```

When a PVC is created using the above StorageClass, the OpenEBS Local PV provisioner creates a new subdirectory inside the BasePath and maps it to the PV.

Note: If the default BasePath needs to be changed to a different hostpath, the specified hostpath (directory) must already be present on the node.
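A complete StorageClass with a non-default BasePath might look like the sketch below; the StorageClass name and path are illustrative, and volumeBindingMode: WaitForFirstConsumer delays binding until the pod is scheduled, so the directory is created on the right node:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-custom        # hypothetical name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/mnt/fast-disks"     # illustrative path; must already exist on the node
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```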

When to use OpenEBS Local PVs

  • When applications need high performance and manage their own replication, data protection, and other features such as snapshots and clones.
  • When local disks need to be managed dynamically and monitored for early warning of impending disk failure.

When not to use OpenEBS Local PVs

  • When applications expect replication from storage.

  • When the volume size may need to be changed dynamically but the underlying disk is not resizable.

Limitations (or roadmap items) of OpenEBS Local PVs

  • The size of a Local PV cannot be increased dynamically. LVM-like functionality inside Local PVs is a potential roadmap feature.
  • Disk quotas are not enforced by Local PV: an underlying disk or hostpath can hold more data than requested by a PVC or StorageClass. Enforcing capacity and PVC resource quotas on the local disks or host paths is a roadmap feature.
  • SMART statistics of the managed disks are also a potential roadmap feature.

See Also:

OpenEBS Architecture

Understanding NDM

Local PV User Guide
