cStor User Guide

This user guide section describes the operations that need to be performed by the User and the Admin for configuring cStor-related tasks.

User operations

Provisioning a cStor volume

Monitoring a cStor Volume

Backup and Restore

Snapshot and Clone of a cStor Volume

Upgrading the software version of a cStor volume

Deleting a cStor Volume

Patching pool deployment by adding or modifying resource limit and requests

Admin operations

Creating cStor storage pools

Setting Pool Policies

Creating cStor storage classes

Setting Storage Polices

Monitoring a cStor Pool

Setting Performance Tunings

Upgrade the software version of a cStor pool

Expanding cStor pool to a new node

Expanding size of a cStor pool instance on a node by expanding the size of cloud disks

Expanding size of a cStor pool instance on a node by adding physical/virtual disks to a pool instance

Expanding the cStor Volume Capacity

User Operations

Provisioning a cStor volume

Provisioning a cStor volume requires a cStor storage pool and a StorageClass. The configuration and verification of a cStor storage pool can be checked from here. The configuration and verification of a StorageClass can be checked from here.

Use a similar PVC spec or volumeClaimTemplate to use a StorageClass that is pointing to a pool with real disks. Consider the following parameters while provisioning OpenEBS volumes on real disks.

AccessModes: cStor provides iSCSI targets, which are appropriate for the RWO (ReadWriteOnce) access mode and suitable for all types of databases. For webscale applications like WordPress or for any other NFS needs, you need the RWM (ReadWriteMany) access mode. For RWM, you need an NFS provisioner to be deployed along with cStor. See how to provision an RWM PVC with OpenEBS.
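As an illustrative sketch only, a claim against such an NFS provisioner might look like the following. The StorageClass name openebs-rwx and the claim name are assumptions, not values from this guide; use the class name exposed by your NFS provisioner.

```yaml
# Hypothetical RWM claim; assumes an NFS provisioner exposing a
# StorageClass named "openebs-rwx" has already been deployed.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-shared-pvc      # placeholder name
spec:
  storageClassName: openebs-rwx   # placeholder class from the NFS provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```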

Size: cStor supports thin provisioning by default, which means you can request any size of the volume through the PVC and get it provisioned. Resize of the volume is not fully supported through the OpenEBS control plane in the current release (OpenEBS 0.9.0) and is under active development; see the roadmap for more details. Hence it is recommended to add a generous buffer to the required size of the volume so that you do not need to resize immediately or within a very short time period.

The following shows example PVC configurations for a Deployment and a StatefulSet application which use a configured StorageClass to provision a cStor volume. The provided StorageClass name will contain the StoragePoolClaim name, and the cStor volume will be provisioned on a StoragePool associated with the StoragePoolClaim.

Example configuration for requesting OpenEBS volume for a Deployment

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: cstor-pvc-mysql-large
  spec:
    storageClassName: openebs-cstor-pool1-3-replicas
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 500Gi

Example configuration for requesting OpenEBS volume for a StatefulSet

  spec:
    volumeClaimTemplates:
    - metadata:
        name: elasticdb-vol-openebs
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 500Gi
        storageClassName: openebs-cstor-pool1-1-replica

Monitoring a cStor Volume

By default, the VolumeMonitor is set to ON in the cStor StorageClass. Volume metrics are exported when this parameter is set to ON. The following metrics are supported by cStor as of the current release.

  openebs_actual_used              # Actual volume size used
  openebs_connection_error_total   # Total no of connection errors
  openebs_connection_retry_total   # Total no of connection retry requests
  openebs_degraded_replica_count   # Total no of degraded/ro replicas
  openebs_healthy_replica_count    # Total no of healthy replicas
  openebs_logical_size             # Logical size of volume
  openebs_parse_error_total        # Total no of parsing errors
  openebs_read_block_count         # Read block count of volume
  openebs_read_time                # Read time on volume
  openebs_reads                    # Read Input/Outputs on volume
  openebs_sector_size              # Sector size of volume
  openebs_size_of_volume           # Size of the volume requested
  openebs_total_read_bytes         # Total read bytes
  openebs_total_replica_count      # Total no of replicas connected to cas
  openebs_total_write_bytes        # Total write bytes
  openebs_volume_status            # Status of volume: (1, 2, 3, 4) = {Offline, Degraded, Healthy, Unknown}
  openebs_volume_uptime            # Time since volume has registered
  openebs_write_block_count        # Write block count of volume
  openebs_write_time               # Write time on volume
  openebs_writes                   # Write Input/Outputs on volume

Grafana charts can be built for the above Prometheus metrics. Some metrics for OpenEBS volumes are available automatically in Kubera when you connect the Kubernetes cluster to it. See an example screenshot below.
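As a sketch of how such charts could be driven, assuming the metrics above are scraped into Prometheus, Grafana panels might use queries like the following. The label name openebs_pv and the PV value are illustrative assumptions; use the labels your exporter actually attaches.

```promql
# Used capacity of one volume (label name/value are assumptions)
openebs_actual_used{openebs_pv="pvc-example"}

# Write IOPS over a 5-minute window
rate(openebs_writes{openebs_pv="pvc-example"}[5m])

# Volumes currently not Healthy (per the status codes above, 3 = Healthy)
openebs_volume_status != 3
```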

OpenEBS configuration flow

Snapshot and Clone of a cStor Volume

An OpenEBS snapshot is a set of reference markers for data at a particular point in time. A snapshot acts as a detailed table of contents, with accessible copies of data that the user can roll back to when required. Snapshots in OpenEBS are instantaneous and are managed through kubectl.

During the installation of OpenEBS, a snapshot-controller and a snapshot-provisioner are set up, which assist in taking the snapshots. During snapshot creation, the snapshot-controller creates VolumeSnapshot and VolumeSnapshotData custom resources. The snapshot-provisioner is used to restore a snapshot as a new Persistent Volume (PV) via dynamic provisioning.

In this section, the steps for the creation, cloning, and deletion of a snapshot are provided.

Creating a cStor Snapshot

The following steps will help you to create a snapshot of a cStor volume. For creating the snapshot, you need to create a YAML specification and provide the required PVC name in it. The only prerequisite check to be performed is to ensure that there are no stale entries of snapshot and snapshot data before creating a new snapshot.

  • Copy the following YAML specification into a file called snapshot.yaml.

    apiVersion: volumesnapshot.external-storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: snapshot-cstor-volume
      namespace: <Source_PVC_namespace>
    spec:
      persistentVolumeClaimName: cstor-vol1-claim
  • Edit the snapshot.yaml created in the previous step to update

    • name :- Name of the snapshot to be created
    • namespace :- Namespace of the source PVC
    • persistentVolumeClaimName :- Source PVC of which you are going to take the snapshot.
  • Run the following command to create the snapshot of the provided PVC.

    kubectl apply -f snapshot.yaml -n <namespace>

    The above command creates a snapshot of the cStor volume along with two new custom resources. To list the snapshots, use the following commands.

    kubectl get volumesnapshot
    kubectl get volumesnapshotdata

    Note: All cStor snapshots should be created in the same namespace as the source PVC.

Cloning a cStor Snapshot

Once the snapshot is created, restoring from a snapshot or cloning the snapshot is done through a two-step process. First, create a PVC that refers to the snapshot, and then use the PVC to create a new PV. This PVC must refer to a storage class called openebs-snapshot-promoter.

  • Copy the following YAML specification into a file called snapshot_claim.yaml.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vol-claim-cstor-snapshot
      namespace: <Source_PVC_namespace>
      annotations:
        snapshot.alpha.kubernetes.io/snapshot: snapshot-cstor-volume
    spec:
      storageClassName: openebs-snapshot-promoter
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 4G
  • Edit the YAML file to update

    • name :- Name of the clone PVC
    • namespace :- Same namespace as the source PVC
    • The annotation snapshot.alpha.kubernetes.io/snapshot :- Name of the snapshot
    • storage :- The size of the volume being cloned or restored. This should be the same as the source PVC.

    Note: Size and namespace should be the same as those of the original PVC from which the snapshot was created.

  • Run the following command to create a cloned PVC.

    kubectl apply -f snapshot_claim.yaml -n <namespace>
  • Get the details of newly created PVC for the snapshot.

    kubectl get pvc -n <namespace>
  • Mount the above PVC in an application YAML to browse the data from the clone.

Note: Before deleting a source volume, it is mandatory to delete the associated clone volumes of that source volume. Source volume deletion will fail if any associated clone volume is present on the cluster.

Deleting a cStor Snapshot

Delete the snapshot using the kubectl command by providing the same YAML specification that was used to create the snapshot.

  kubectl delete -f snapshot.yaml -n <namespace>

This will not affect any PersistentVolumeClaims or PersistentVolumes that were already provisioned using the snapshot. On the other hand, deleting any PersistentVolumeClaims or PersistentVolumes that were provisioned using the snapshot will not delete the snapshot from the OpenEBS backend.

Backup and Restore

OpenEBS volumes can be backed up and restored along with the application using the OpenEBS Velero plugin. It helps the user back up OpenEBS volumes to a third-party storage location and restore the data whenever required. The steps for taking a backup and restore are as follows.

Prerequisites

  • Latest tested Velero version is 1.0.0.
  • Create the required storage provider configuration to store the backup.
  • Create the required OpenEBS storage pools and storage classes on the destination cluster.
  • Add a common label to all the resources associated with the application that you want to back up. For example, add an application label selector in associated components such as PVC, SVC, etc.

Install Velero (Formerly known as ARK)

Follow the instructions at Velero documentation to install and configure Velero.

Steps for Backup

Velero is a utility to back up and restore your Kubernetes resources and persistent volumes.

To back up or restore OpenEBS cStor volumes through the Velero utility, you need to install and configure the OpenEBS velero-plugin. If you are using the OpenEBS velero-plugin, the velero backup command invokes the velero-plugin internally, takes a snapshot of the cStor volume data, and sends it to the remote storage location mentioned in 06-volumesnapshotlocation.yaml. The configuration of 06-volumesnapshotlocation.yaml is described in the next section.

Configure Volumesnapshot Location

To take a backup of cStor volume through Velero, configure VolumeSnapshotLocation with provider openebs.io/cstor-blockstore. Sample YAML file for volumesnapshotlocation can be found at 06-volumesnapshotlocation.yaml from the openebs/velero-plugin repo.

Sample spec for configuring volume snapshot location.

  apiVersion: velero.io/v1
  kind: VolumeSnapshotLocation
  metadata:
    name: <LOCATION_NAME>
    namespace: velero
  spec:
    provider: openebs.io/cstor-blockstore
    config:
      bucket: <YOUR_BUCKET>
      prefix: <PREFIX_FOR_BACKUP_NAME>
      backupPathPrefix: <PREFIX_FOR_BACKUP_PATH>
      provider: <GCP_OR_AWS>
      region: <AWS_REGION or minio>

The following are the definitions of each parameter.

  • name : Provide a snapshot location name. Eg: gcp-default
  • bucket : Provide the bucket name created on the cloud provider. Eg: gcpbucket
  • prefix : Prefix for the backup name. Eg: cstor
  • backupPathPrefix : Prefix for the backup path. Eg: newbackup. This should be the same as the prefix mentioned in 05-backupstoragelocation.yaml for keeping all backups at the same path. For more details, please refer here.
  • provider : Provider name. Eg: gcp or aws
  • region : Provide the region name if the cloud provider is AWS, or use minio if it is a MinIO bucket.

For configuring parameters for AWS or MinIO in volumesnapshotlocation, refer here for more details.
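As an illustration of the MinIO case, a snapshot location might look like the following sketch. The bucket name, service address, and s3Url are placeholder assumptions; verify the exact config keys against the openebs/velero-plugin repository for your version.

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: minio-default              # placeholder location name
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velerobucket           # assumed bucket already created in MinIO
    prefix: cstor
    provider: aws                  # MinIO is S3-compatible, so the aws provider is used
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000   # assumed in-cluster MinIO endpoint
```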

Example for GCP configuration:

  ---
  apiVersion: velero.io/v1
  kind: VolumeSnapshotLocation
  metadata:
    name: gcp-default
    namespace: velero
  spec:
    provider: openebs.io/cstor-blockstore
    config:
      bucket: gcpbucket
      prefix: cstor
      backupPathPrefix: newbackup
      provider: gcp

After creating the 06-volumesnapshotlocation.yaml with the necessary details, apply the YAML using the following command.

  kubectl apply -f 06-volumesnapshotlocation.yaml

Currently supported volumesnapshotlocations for velero-plugin are AWS, GCP and MinIO.

Managing Backups

Take the backup using the below command. Here, you need to provide the label of the application.

  velero backup create <backup-name> -l app=<app-label-selector> --snapshot-volumes --volume-snapshot-locations=<SNAPSHOT_LOCATION>

Note: SNAPSHOT_LOCATION should be the same as the one you configured in 06-volumesnapshotlocation.yaml. You can use --selector as a flag in the backup command to filter specific resources, or use a combination of --include-namespaces and --exclude-resources to exclude specific resources in the specified namespace. More details can be read from here.

Example:

  velero backup create new1 -l app=minio --snapshot-volumes --volume-snapshot-locations=gcp-default

The above command will take a backup of all resources which have the common label app=minio.

After taking the backup, verify whether the backup was taken successfully by using the following command.

  velero get backup

The following is a sample output.

  NAME   STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
  new1   Completed   2019-06-13 12:44:26 +0530 IST   29d       default            app=minio

From the example mentioned in configure-volumesnapshotlocation, backup files of cStor volumes will be stored at gcpbucket/newbackup/backups/new1/cstor-<pv_name>-new1

You can get more details about the backup using the following command.

  velero backup describe <backup_name>

Example:

  velero backup describe new1

Once the backup is completed, you should see the Phase marked as Completed and the Persistent Volumes field showing the number of successful snapshots.

Steps for Restore

A Velero backup can be restored onto a new cluster or onto the same cluster. An OpenEBS PVC with the same name as the original PVC will be created, and the application will run using the restored OpenEBS volume.

Prerequisites

  • Create the same namespace and StorageClass configuration of the source PVC in your target cluster.
  • If the restoration happens on the same cluster where the source PVC was created, then ensure that the application and its corresponding components such as Service, PVC, PV, and cStorVolumeReplicas are deleted successfully.

On the target cluster, restore the application using the below command.

  velero restore create <restore-name> --from-backup <backup-name> --restore-volumes=true

Example:

  velero restore create new-restore --from-backup new1 --restore-volumes=true

The restoration job details can be obtained using the following command.

  velero restore get

Once the restore job is completed you should see the corresponding restore job is marked as Completed.

Note: After restoring, you need to set the targetip for the volume in the pool pod. The target IP of the PVC can be found by running the following command.

  kubectl get svc -n <openebs_installed_namespace>

Output will be similar to the following

  NAME                                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                               AGE
  admission-server-svc                       ClusterIP   10.4.40.66    <none>        443/TCP                               9h
  maya-apiserver-service                     ClusterIP   10.4.34.15    <none>        5656/TCP                              9h
  pvc-9b43e8a6-93d2-11e9-a7c6-42010a800fc0   ClusterIP   10.4.43.221   <none>        3260/TCP,7777/TCP,6060/TCP,9500/TCP   8h

In this case, 10.4.43.221 is the service IP of the PV. This target IP is required after logging in to the pool pod. The steps for updating the target IP are as follows:

  kubectl exec -it <POOL_POD> -c cstor-pool -n openebs -- bash

After entering the cstor-pool container, get the dataset name from the output of the following command.

  zfs list | grep <pv_name>

Update the targetip for the corresponding dataset using the following command.

  zfs set io.openebs:targetip=<PVC SERVICE IP> <POOL_NAME/VOLUME_NAME>

After executing the above command, exit from the container session.

Verify application status using the following command. Now the application should be running.

  kubectl get pod -n <namespace>

Verify PVC status using the following command.

  kubectl get pvc -n <namespace>

Scheduling backups

Periodic backups can be taken using the velero schedule command.

With the velero-plugin, these periodic backups are incremental backups, which saves storage space and backup time. To restore a periodic backup with the velero-plugin, refer here for more details. The following command will schedule the backup as per the mentioned cron expression.

  velero schedule create <backup-schedule-name> --schedule "0 * * * *" --snapshot-volumes --volume-snapshot-locations=<SNAPSHOT_LOCATION> -l app=<app-label-selector>

Note: SNAPSHOT_LOCATION should be the same as the one you configured in 06-volumesnapshotlocation.yaml.

Get the details of the backups using the following command.

  velero backup get

During the first backup iteration of a schedule, the full data of the volume is backed up. After the full backup is taken in the first iteration, subsequent iterations take incremental backups.

Restore from a Scheduled Backup

Since the backups taken for a schedule are incremental, the order of restoring data is important. You need to restore data in the order in which the backups were created.

For example, below are the available backups for a schedule.

  NAME                   STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
  sched-20190513104034   Completed   2019-05-13 16:10:34 +0530 IST   29d       gcp                <none>
  sched-20190513103534   Completed   2019-05-13 16:05:34 +0530 IST   29d       gcp                <none>
  sched-20190513103034   Completed   2019-05-13 16:00:34 +0530 IST   29d       gcp                <none>

Restoration of the data needs to be done in the following order:

  velero restore create --from-backup sched-20190513103034 --restore-volumes=true
  velero restore create --from-backup sched-20190513103534 --restore-volumes=true
  velero restore create --from-backup sched-20190513104034 --restore-volumes=true

Deletion of Backups

To delete a single backup which was not created from a scheduled backup, use the following command.

  velero backup delete <backup_name>

Note: The deletion of a backup will not delete the snapshots created as part of the backup from the cStor storage pool. These can be deleted by following the manual steps below.

  1. First verify the cStor backups created for the corresponding cStor volume. To obtain the cStor backups of a cStor volume, use the following command with the corresponding backup name.

     kubectl get cstorbackups -n <backup_namespace> -l openebs.io/backup=<backup_name> -o=jsonpath='{range .items[*]}{.metadata.labels.openebs\.io/persistent-volume}{"\n"}{end}'
  2. Delete the corresponding cStor backups using the following command.

     kubectl delete cstorbackups -n <backup_namespace> -l openebs.io/backup=<backup_name>
  3. To delete the cStor backup completed jobs, use the following command.

     kubectl delete cstorbackupcompleted -n <backup_namespace> -l openebs.io/backup=<backup_name>

The deletion of a Velero backup schedule does not destroy the backups created during the schedule. Users need to delete scheduled backups manually, using the steps above.

Upgrading the software version of a cStor volume

The steps are mentioned in the Upgrade section. For upgrading a cStorVolume, ensure that the cStor pool image supports the cStor volume image. It is also recommended to upgrade the corresponding pool before upgrading the cStor volume. The steps for upgrading the cStor volume can be found here.

Deleting a cStor Volume

A cStor volume can be deleted by deleting the corresponding PVC. This can be done using the following command.

  kubectl delete pvc <PVC_name> -n <PVC_namespace>

The successful deletion of a cStor volume can be verified by running the following commands and ensuring that no entries for the particular volume exist in the output.

Verify the PVC is deleted successfully using the following command.

  kubectl get pvc -n <namespace>

Verify the PV is deleted successfully using the following command.

  kubectl get pv

Verify the cStorVolumeReplica(CVR) is deleted successfully using the following command.

  kubectl get cvr -n <openebs_installed_namespace>

Verify that the corresponding cStor volume target pod is also deleted successfully using the following command.

  kubectl get pod -n <openebs_installed_namespace> | grep <pvc_name>

Patching pool deployment by adding or modifying resource limit and requests

  1. Create a patch file called “patch.yaml” and add the following content to it. You can change the values based on the Node configuration. Recommended values are 4Gi for limits and 2Gi for requests.

    spec:
      template:
        spec:
          containers:
          - name: cstor-pool
            resources:
              limits:
                memory: 4Gi
              requests:
                memory: 2Gi
  2. Get the pool deployment using the following command:

    kubectl get deploy -n openebs
  3. Patch the corresponding pool deployment using the following command.

    kubectl patch deployment <pool_deployment_name> --patch "$(cat patch.yaml)" -n <openebs_installed_namespace>

    Eg:

    kubectl patch deployment <pool_deployment_name> --patch "$(cat patch.yaml)" -n openebs

    Note: After patching, the existing pool pod will be terminated and a new pool pod will be created. Repeat the same process for the other deployments of the same pool, one by one, once the new pool pod is created.

Admin Operations

Creating cStor Storage Pools

The cStorStoragePool can be created by specifying the blockDeviceList. The following section will describe the steps in detail.

Create a cStorPool by specifying blockDeviceList

Overview

  1. Get the details of blockdevices attached in the cluster.
  2. Create a StoragePoolClaim configuration YAML and update the required details.
  3. Apply the StoragePoolClaim configuration YAML to create the cStorStoragePool.

Step1:

Get all the blockdevices attached in the cluster with the following command. Modify the following command with the appropriate namespace where OpenEBS is installed. The default namespace where OpenEBS is installed is openebs.

  kubectl get blockdevice -n <openebs_namespace>

Example:

  kubectl get blockdevice -n openebs

The output will be similar to the following.

  NAME                                           SIZE          CLAIMSTATE   STATUS   AGE
  blockdevice-1c10eb1bb14c94f02a00373f2fa09b93   42949672960   Unclaimed    Active   1m
  blockdevice-77f834edba45b03318d9de5b79af0734   42949672960   Unclaimed    Active   1m
  blockdevice-936911c5c9b0218ed59e64009cc83c8f   42949672960   Unclaimed    Active   1m

The details of a blockdevice can be obtained using the following command.

  kubectl describe blockdevice <blockdevicename> -n <openebs_namespace>

Example:

  kubectl describe blockdevice blockdevice-77f834edba45b03318d9de5b79af0734 -n openebs

From the output, you will get the hostname and other blockdevice details such as State, Path, Claim State, Capacity, etc.

Note: Identify block devices which are unclaimed, unmounted on the node, and do not contain any filesystem. The above command will help you find this information. More information about the disk mount status on the node can be read from here.

Step2:

Create a StoragePoolClaim configuration YAML file called cstor-pool1-config.yaml with the following content. In the following YAML, the PoolResourceRequests value is set to 2Gi and the PoolResourceLimits value is set to 4Gi. These resources will be shared by all the volume replicas that reside on the pool. A value of 2Gi to 4Gi per pool on a given node gives better performance, and these values can be changed as per the node configuration. Refer to setting pool policies for more details on the pool policies applicable for cStor.

  # Use the following YAML to create a cStor storage pool.
  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk-pool
    annotations:
      cas.openebs.io/config: |
        - name: PoolResourceRequests
          value: |-
            memory: 2Gi
        - name: PoolResourceLimits
          value: |-
            memory: 4Gi
  spec:
    name: cstor-disk-pool
    type: disk
    poolSpec:
      poolType: striped
    blockDevices:
      blockDeviceList:
      - blockdevice-936911c5c9b0218ed59e64009cc83c8f
      - blockdevice-77f834edba45b03318d9de5b79af0734
      - blockdevice-1c10eb1bb14c94f02a00373f2fa09b93
  ---

In the above file, change the following parameters as required.

  • poolType

    This field represents how the data will be written to the disks on a given pool instance on a node. Supported values are striped, mirrored, raidz and raidz2.

    Note: In OpenEBS, the pool instance does not extend beyond a node. The replication happens at volume level but not at the pool level. See volumes and pools relationship in cStor for a deeper understanding.

  • blockDeviceList

    Select the list of unclaimed blockDevice CRs which are unmounted and do not contain a filesystem on each participating node, and enter them under blockDeviceList.

    To get the list of blockDevice CRs, use kubectl get blockdevice -n openebs.

    You must enter all selected blockDevice CRs manually together from the selected nodes.

    When the poolType = mirrored, ensure the number of blockDevice CRs selected from each node is an even number. The data is striped across mirrors. For example, if 4x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 2TB.

    When the poolType = striped, the number of blockDevice CRs from each node can be any number. The data is striped across each blockDevice. For example, if 4x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 4TB.

    When the poolType = raidz, ensure that the number of blockDevice CRs selected from each node is 3, 5, 7, etc. The data is written with parity. For example, if 3x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 2TB; 1 disk is used as a parity disk.

    When the poolType = raidz2, ensure that the number of blockDevice CRs selected from each node is 6, 8, 10, etc. The data is written with dual parity. For example, if 6x1TB blockDevices are selected on node1, the raw capacity of the pool instance of cstor-disk-pool on node1 is 4TB; 2 disks are used for parity.

The number of selected blockDevice CRs across nodes need not be the same. Unclaimed blockDevice CRs which are unmounted on nodes and do not contain any filesystem can be added to the pool spec dynamically as the used capacity fills up.

  • type

    This value can be either sparse or disk. If you are creating a sparse pool using the sparse disk based blockDevice which are created as part of applying openebs operator YAML, then choose type as sparse. For other blockDevices, choose type as disk.
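For comparison with the disk-based claim above, a sparse-pool claim for test setups might look like the following sketch. The claim name and maxPools value are assumptions; sparse pools rely on the sparse blockdevices created when the OpenEBS operator YAML is applied and are intended for testing rather than production.

```yaml
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-sparse-pool      # placeholder name
spec:
  name: cstor-sparse-pool
  type: sparse                 # uses the operator-created sparse blockdevices
  maxPools: 1                  # assumed: one pool instance for a test cluster
  poolSpec:
    poolType: striped
```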

Step3:

After the StoragePoolClaim configuration YAML spec is created, run the following command to create the pool instances on nodes.

  kubectl apply -f cstor-pool1-config.yaml

Verify cStor Pool configuration is created successfully using the following command.

  kubectl get spc

The following is an example output.

  NAME         AGE
  cstor-disk   13s

Verify if cStor Pool is created successfully using the following command.

  kubectl get csp

The following is an example output.

  NAME              ALLOCATED   FREE    CAPACITY   STATUS    TYPE      AGE
  cstor-disk-4blm   77K         39.7G   39.8G      Healthy   striped   27s
  cstor-disk-4pfu   77K         39.7G   39.8G      Healthy   striped   26s
  cstor-disk-u1pn   77K         39.7G   39.8G      Healthy   striped   27s

Verify if cStor pool pods are running using the following command.

  kubectl get pod -n <openebs_installed_namespace> | grep -i <spc_name>

Example:

  kubectl get pod -n openebs | grep cstor-disk

Example Output:

  cstor-disk-4blm-5f86b8c6b-b24cq    3/3   Running   0   113s
  cstor-disk-4pfu-58b8c77655-6wpl6   3/3   Running   0   112s
  cstor-disk-u1pn-6dffdf6d7f-j7fsx   3/3   Running   0   113s

If all pods are shown as Running, then you can use these cStor pools for creating cStor volumes.

Note: A cStor pool can be horizontally scaled up onto a new OpenEBS node by editing the corresponding pool configuration YAML with the new disk names under blockDeviceList. More details can be found here. If you find any issues, check the common issues added in the troubleshooting section.
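As a sketch of such an edit, the blockDeviceList of the earlier StoragePoolClaim spec could be extended with a newly attached unclaimed device. The new blockdevice name below is a placeholder; use the name reported by kubectl get blockdevice for the device on the new node.

```yaml
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-936911c5c9b0218ed59e64009cc83c8f
    - blockdevice-77f834edba45b03318d9de5b79af0734
    - blockdevice-1c10eb1bb14c94f02a00373f2fa09b93
    - blockdevice-<new-unclaimed-device>    # placeholder for the newly added device
```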

Setting Pool Policies

This section captures the policies supported for cStorPools in StoragePoolClaim under cas.openebs.io/config in the name and value pair format.

PoolResourceLimits Policy

This feature allows you to set limits on memory and cpu for the pool pods. The resource and limit values should be in the same format as expected by Kubernetes. The name of the SPC can be changed if needed.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
    annotations:
      cas.openebs.io/config: |
        - name: PoolResourceLimits
          value: |-
            memory: 4Gi
  spec:
    name: cstor-disk
    type: disk

PoolResourceRequests Policy

This feature allows you to specify resource requests that need to be available before scheduling the containers. If not specified, the default values are used. The name of the SPC can be changed if needed.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
    annotations:
      cas.openebs.io/config: |
        - name: PoolResourceRequests
          value: |-
            memory: 2Gi
  spec:
    name: cstor-disk
    type: disk

Tolerations

Taints and tolerations can be used to ensure that cStor pool pods are not scheduled onto inappropriate nodes. If nodes are tainted so that only pods tolerating the taint are scheduled on them, then cStor pool pods can also be scheduled on those nodes using this method. Tolerations are applied to cStor pool pods and allow (but do not require) the pods to be scheduled onto nodes with matching taints.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
    annotations:
      cas.openebs.io/config: |
        - name: Tolerations
          value: |-
            t1:
              effect: NoSchedule
              key: nodeA
              operator: Equal
            t2:
              effect: NoSchedule
              key: app
              operator: Equal
              value: storage
  spec:
    name: cstor-disk
    type: disk
    maxPools: 3
    poolSpec:
      poolType: striped

AuxResourceLimits Policy

You can specify AuxResourceLimits, which allow you to set limits on the sidecar containers.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
    annotations:
      cas.openebs.io/config: |
        - name: AuxResourceLimits
          value: |-
            memory: 0.5Gi
            cpu: 100m
  spec:
    name: cstor-disk
    type: disk

AuxResourceRequests Policy

This feature is useful in cases where the user has to specify minimum requests, such as ephemeral storage, to avoid erroneous eviction by Kubernetes. AuxResourceRequests allow you to set requests on the sidecar containers. Requests have to be specified in the format expected by Kubernetes.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
    annotations:
      cas.openebs.io/config: |
        - name: AuxResourceRequests
          value: |-
            memory: 0.5Gi
            cpu: 100m
  spec:
    name: cstor-disk
    type: disk

Creating cStor Storage Class

StorageClass definition is an important task in the planning and execution of OpenEBS storage. As detailed in the CAS page, the real power of the CAS architecture is to give an independent or dedicated storage engine like cStor to each workload, so that granular policies can be applied to that storage engine to tune its behaviour or performance as per the workload's needs. In OpenEBS, policies are applied to the storage engine (in this case cStor) through the annotations specified in the StorageClass interface.

Steps to Create a cStor StorageClass

Step 1: Decide the cStorPool and get the StoragePoolClaim name associated with it.

Step 2: Decide the replica count based on the application that will use the volume.

Step 3: Decide whether any other storage policies are to be applied to the StorageClass. Refer to the storage policies section for more details on the storage policies applicable for cStor.

Step 4: Create a YAML spec file <storage-class-name>.yaml from the master template below, update the pool, replica count and other policies, and create the class using the kubectl apply -f <storage-class-name>.yaml command.

Step 5: Verify the newly created StorageClass using kubectl describe sc <storage-class-name>.

Example Configuration of OpenEBS StorageClass

You can create a new StorageClass YAML file called openebs-sc-rep1.yaml and add the content below to it. The following creates a StorageClass with an OpenEBS volume replica count of 1, cstor-pool2 as the storage pool, and cstor as the CAS type.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-sparse-sc-statefulset
      annotations:
        openebs.io/cas-type: cstor
        cas.openebs.io/config: |
          - name: StoragePoolClaim
            value: "cstor-pool2"
          - name: ReplicaCount
            value: "1"
    provisioner: openebs.io/provisioner-iscsi
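Once the StorageClass is created, a PVC can refer to it by name to provision a cStor volume. Below is a minimal sketch; the claim name and requested size are hypothetical, and since cStor thin-provisions, any size can be requested.

```yaml
# Hypothetical PVC that provisions a cStor volume through the
# StorageClass defined above
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-cstor-claim       # hypothetical claim name
spec:
  storageClassName: openebs-sparse-sc-statefulset
  accessModes:
  - ReadWriteOnce              # cStor iSCSI targets are RWO
  resources:
    requests:
      storage: 4Gi             # hypothetical size
```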

Setting Storage Policies

The table below lists the storage policies supported by cStor. These policies should be built into the StorageClass and applied through the PersistentVolumeClaim or volumeClaimTemplates interface.

cStor Storage Policy | Mandatory | Default | Purpose
--- | --- | --- | ---
ReplicaCount | No | 3 | Defines the number of cStor volume replicas
VolumeControllerImage | No | quay.io/openebs/cstor-volume-mgmt:1.1.0 | Dedicated side car for command management like taking snapshots etc. Can be used to apply a specific issue or feature for the workload
VolumeTargetImage | No | quay.io/openebs/cstor-istgt:1.1.0 | iSCSI protocol stack dedicated to the workload. Can be used to apply a specific issue or feature for the workload
StoragePoolClaim | Yes | N/A (a valid pool must be provided) | The cStorPool on which the volume replicas should be provisioned
VolumeMonitor | No | ON | When ON, a volume exporter sidecar is launched to export Prometheus metrics
VolumeMonitorImage | No | quay.io/openebs/m-exporter:1.1.0 | Used when VolumeMonitor is ON. A dedicated metrics exporter for the workload. Can be used to apply a specific issue or feature for the workload
FSType | No | ext4 | Specifies the filesystem with which the volume should be formatted. xfs is also supported
TargetNodeSelector | No | Decided by Kubernetes scheduler | Specify the label in key: value format to notify the Kubernetes scheduler to schedule the cStor target pod on nodes that match the label
TargetResourceLimits | No | Decided by Kubernetes scheduler | CPU and memory limits for the cStor target pod
TargetResourceRequests | No | Decided by Kubernetes scheduler | Resource requests that need to be available before scheduling the containers
TargetTolerations | No | Decided by Kubernetes scheduler | Tolerations for the target pod
AuxResourceLimits | No | Decided by Kubernetes scheduler | Resource limits on the volume pod side cars
AuxResourceRequests | No | Decided by Kubernetes scheduler | Minimum requests like ephemeral storage etc. to avoid erroneous eviction by Kubernetes
Target Affinity | No | Decided by Kubernetes scheduler | Specifies the label key: value pair to be used both on the cStor target and on the application so that the application pod and cStor target pod are scheduled on the same node
Target Namespace | No | openebs | When a service account name is specified, the cStor target pod is scheduled in the application's namespace
cStorStoragePool Replica Anti-Affinity | No | Decided by Kubernetes scheduler | For StatefulSet applications, distributes single-replica volumes on separate nodes

Replica Count Policy

You can specify the cStor volume replica count using the ReplicaCount property. In the following example, the ReplicaCount is specified as 3. Hence, three cStor volume replicas will be created.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: ReplicaCount
            value: "3"
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Volume Controller Image Policy

You can specify the cStor Volume Controller Image using the value for VolumeControllerImage property. This will help you choose the volume management image.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: VolumeControllerImage
            value: quay.io/openebs/cstor-volume-mgmt:1.1.0
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Volume Target Image Policy

You can specify the cStor Target Image using the value for VolumeTargetImage property. This will help you choose the cStor istgt target image.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: VolumeTargetImage
            value: quay.io/openebs/cstor-istgt:1.1.0
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Storage Pool Claim Policy

You can specify the cStor Pool Claim name using the value for StoragePoolClaim property. This will help you choose cStor storage pool where OpenEBS volume will be created. Following is the default StorageClass template where cStor volume will be created on default cStor Sparse Pool.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: StoragePoolClaim
            value: "cstor-sparse-pool"
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Volume Monitor Policy

You can enable or disable the cStor volume monitoring feature using the value for the VolumeMonitor property. By default, volume monitoring is enabled.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: VolumeMonitor
            enabled: "true"
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Volume Monitoring Image Policy

You can specify the monitoring image policy for a particular volume using value for VolumeMonitorImage property. The following sample storage class uses the Volume Monitor Image policy.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: VolumeMonitorImage
            value: quay.io/openebs/m-exporter:1.1.0
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Volume File System Type Policy

You can specify the file system type with which the cStor volume should be formatted for the application, using the value for FSType. The following is a sample StorageClass. Currently, OpenEBS supports ext4 as the default file system; xfs is also supported.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: FSType
            value: ext4
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Target NodeSelector Policy

You can specify where the target pod has to be scheduled using the value for TargetNodeSelector. In the following example, node: appnode is the node label.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: TargetNodeSelector
            value: |-
              node: appnode
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi
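For the selector above to match, at least one node must carry the node: appnode label, for example via kubectl label node <node-name> node=appnode. A sketch of the resulting node metadata (the node name is hypothetical):

```yaml
# Hypothetical Node excerpt carrying the label that the
# TargetNodeSelector value above matches
apiVersion: v1
kind: Node
metadata:
  name: worker-1       # hypothetical node name
  labels:
    node: appnode      # matches the TargetNodeSelector value
```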

Target ResourceLimits Policy

You can specify TargetResourceLimits to restrict the memory and CPU usage of the target pod within the given limits.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: TargetResourceLimits
            value: |-
              memory: 1Gi
              cpu: 100m
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

TargetResourceRequests Policy

You can use TargetResourceRequests to specify resource requests that need to be available before scheduling the containers.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: TargetResourceRequests
            value: "none"
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

TargetTolerations Policy

You can specify TargetTolerations to set the tolerations for the target pod.

    - name: TargetTolerations
      value: |-
        t1:
          key: "key1"
          operator: "Equal"
          value: "value1"
          effect: "NoSchedule"
        t2:
          key: "key1"
          operator: "Equal"
          value: "value1"
          effect: "NoExecute"
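The snippet above is only the config entry; it goes under cas.openebs.io/config in a StorageClass, alongside the other policies. A sketch of a full class (the class name is hypothetical):

```yaml
# Hypothetical StorageClass embedding a TargetTolerations entry
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-tolerations-sc     # hypothetical class name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: TargetTolerations
        value: |-
          t1:
            key: "key1"
            operator: "Equal"
            value: "value1"
            effect: "NoSchedule"
provisioner: openebs.io/provisioner-iscsi
```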

AuxResourceLimits Policy

You can specify AuxResourceLimits to set resource limits on the side cars.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: AuxResourceLimits
            value: |-
              memory: 0.5Gi
              cpu: 100m
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

AuxResourceRequests Policy

This feature is useful in cases where the user has to specify minimum requests, such as ephemeral storage, to avoid erroneous eviction by Kubernetes. AuxResourceRequests allows you to set requests on the side cars. Requests have to be specified in the format expected by Kubernetes.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        cas.openebs.io/config: |
          - name: AuxResourceRequests
            value: |-
              memory: 0.5Gi
              cpu: 100m
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi

Target Affinity Policy

StatefulSet workloads access the OpenEBS storage volume by connecting to the volume target pod. This policy can be used to co-locate the volume target pod on the same node as the workload pod.

The configuration for implementing this policy is different for deployment and StatefulSet applications.

For StatefulSet Applications

When provisioning StatefulSet applications with a replication factor greater than "1" and a volume replication factor equal to "1", the target and replica related to a given OpenEBS volume should be scheduled on the same node where the application pod resides. This can be achieved using either of the following approaches.

Approach 1:

In this approach, modification is required in the StatefulSet spec and the corresponding StorageClass referred to in the StatefulSet spec. Add the openebs.io/sts-target-affinity: <metadata.name of STS> label to the following fields in the StatefulSet spec.

  • spec.selector.matchLabels
  • spec.template.labels

Example snippet:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: test-application
      labels:
        app: test-application
    spec:
      serviceName: test-application
      replicas: 1
      selector:
        matchLabels:
          app: test-application
          openebs.io/sts-target-affinity: test-application
      template:
        metadata:
          labels:
            app: test-application
            openebs.io/sts-target-affinity: test-application

Make the following change in the StorageClass that is referred to by the claimTemplates of this StatefulSet.

  • Set volumeBindingMode to WaitForFirstConsumer

Example snippet:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cstor-sts
      annotations:
        openebs.io/cas-type: cstor
        cas.openebs.io/config: |
          - name: ReplicaCount
            value: "1"
          - name: StoragePoolClaim
            value: "cstor-sparse-pool"
    provisioner: openebs.io/provisioner-iscsi
    volumeBindingMode: WaitForFirstConsumer

Approach 2:

This approach is useful when the user or tool does not have control over the StatefulSet spec. In this case, a new StorageClass is required per StatefulSet application.

Make the following changes in the StorageClass that is referred to by the claimTemplates of this StatefulSet.

  • Add openebs.io/sts-target-affinity: <metadata.name of STS> label to the following fields.
    • metadata.labels
  • Set volumeBindingMode to WaitForFirstConsumer

Example snippet:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: cstor-sts
      labels:
        openebs.io/sts-target-affinity: test-application # name of the StatefulSet application
      annotations:
        openebs.io/cas-type: cstor
        cas.openebs.io/config: |
          - name: ReplicaCount
            value: "1"
          - name: StoragePoolClaim
            value: "cstor-sparse-pool"
    provisioner: openebs.io/provisioner-iscsi
    volumeBindingMode: WaitForFirstConsumer

Note: It is recommended to pin the application pod to a node (pod stickiness) for seamless working of the above approaches. An example YAML spec for the StatefulSet can be found here.

For Deployment Applications

This feature makes use of the Kubernetes pod affinity feature, which depends on pod labels. You need to add the following label to both the application and the PVC.

    labels:
      openebs.io/target-affinity: <application-unique-label>

You can specify the target affinity in both the application and the OpenEBS PVC as follows. For the application pod, it will be similar to the following.

    apiVersion: v1
    kind: Pod
    metadata:
      name: fio-cstor
      labels:
        name: fio-cstor
        openebs.io/target-affinity: fio-cstor

The following is the sample snippet of the PVC to use Target affinity.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: fio-cstor-claim
      labels:
        openebs.io/target-affinity: fio-cstor

Note: This feature works only for cases where there is a 1-1 mapping between an application and a PVC.

Target Namespace

By default, the cStor target pods are scheduled in the dedicated openebs namespace. The target pod is also provided with the OpenEBS service account so that it can access the Kubernetes custom resource called CStorVolume and Events. This policy allows the cluster administrator to specify whether the volume target pods should be deployed in the namespace of the workloads itself. This can help with setting limits on the resources of the target pods, based on the namespace in which they are deployed. To use this policy, the cluster administrator can either use the existing OpenEBS service account or create a new service account with limited access and provide it in the StorageClass as follows:

    annotations:
      cas.openebs.io/config: |
        - name: PVCServiceAccountName
          value: "user-service-account"

The sample service account can be found here.
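In a complete StorageClass, the annotation fragment above sits alongside the other cStor policies. A sketch, assuming user-service-account exists in the application's namespace (the class and pool names are hypothetical):

```yaml
# Hypothetical StorageClass that schedules the target pod in the
# workload's namespace using the given service account
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-app-ns-sc            # hypothetical class name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk"        # hypothetical pool name
      - name: PVCServiceAccountName
        value: "user-service-account"
provisioner: openebs.io/provisioner-iscsi
```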

cStorStoragePool Replica Anti-Affinity

This policy adds the ability in cStor to distribute single-replica volumes across pools that are deployed on separate nodes, when the application consuming these volumes is deployed as a StatefulSet.

Below are supported anti-affinity features:

  • openebs.io/replica-anti-affinity: <unique_identification_of_app_in_cluster>
  • openebs.io/preferred-replica-anti-affinity: <unique_identification_of_app_in_cluster>

Below is an example StatefulSet YAML spec (together with its Service and StorageClass) that makes use of openebs.io/replica-anti-affinity:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-cstor-pool-sts
      annotations:
        cas.openebs.io/config: |
          - name: StoragePoolClaim
            value: "cstor-sparse-pool"
          - name: ReplicaCount
            value: "1"
        openebs.io/cas-type: cstor
    provisioner: openebs.io/provisioner-iscsi
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: busybox1
      name: busybox1
    spec:
      clusterIP: None
      selector:
        app: busybox1
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: busybox1
      labels:
        app: busybox1
    spec:
      serviceName: busybox1
      replicas: 1
      selector:
        matchLabels:
          app: busybox1
          openebs.io/replica-anti-affinity: busybox1
      template:
        metadata:
          labels:
            app: busybox1
            openebs.io/replica-anti-affinity: busybox1
        spec:
          terminationGracePeriodSeconds: 1800
          containers:
          - name: busybox1
            image: ubuntu
            imagePullPolicy: IfNotPresent
            command:
            - sleep
            - infinity
            volumeMounts:
            - name: busybox1
              mountPath: /busybox1
      volumeClaimTemplates:
      - metadata:
          name: busybox1
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: openebs-cstor-pool-sts
          resources:
            requests:
              storage: 2Gi

Upgrade the Software Version of a cStor pool

The steps for upgrading a cStor pool are covered in the Upgrade section. Refer to the Upgrade section for more details.

Monitor a cStor Pool

A new sidecar runs once a cStor pool pod is created. This sidecar collects the metrics of the corresponding cStorStoragePool. The following metrics are supported by cStor to export cStorStoragePool usage statistics as Prometheus metrics.

    openebs_volume_replica_available_size # Available size of volume replica on a pool
    openebs_volume_replica_used_size # Used size of volume replica on a pool
    openebs_dispatched_io_count # Dispatched IO's count
    openebs_free_pool_capacity # Free capacity in pool
    openebs_inflight_io_count # Inflight IO's count
    openebs_maya_exporter_version # A metric with a constant '1' value labeled by commit and version from which maya-exporter was built
    openebs_pool_size # Size of pool
    openebs_pool_status # Status of pool (0, 1, 2, 3, 4, 5, 6) = {"Offline", "Online", "Degraded", "Faulted", "Removed", "Unavail", "NoPoolsAvailable"}
    openebs_read_latency # Read latency on replica
    openebs_rebuild_bytes # Rebuild bytes
    openebs_rebuild_count # Rebuild count
    openebs_rebuild_status # Status of rebuild on replica (0, 1, 2, 3, 4, 5, 6) = {"INIT", "DONE", "SNAP REBUILD INPROGRESS", "ACTIVE DATASET REBUILD INPROGRESS", "ERRORED", "FAILED", "UNKNOWN"}
    openebs_replica_status # Status of replica (0, 1, 2, 3) = {"Offline", "Healthy", "Degraded", "Rebuilding"}
    openebs_total_rebuild_done # Total number of rebuilds done on replica
    openebs_sync_count # Total number of syncs on replica
    openebs_sync_latency # Sync latency on replica
    openebs_total_failed_rebuild # Total number of failed rebuilds on replica
    openebs_total_read_bytes # Total read in bytes
    openebs_total_read_count # Total read io count
    openebs_total_write_bytes # Total write in bytes
    openebs_total_write_count # Total write io count
    openebs_used_pool_capacity # Capacity used by pool
    openebs_used_pool_capacity_percent # Capacity used by pool in percent
    openebs_used_size # Used size of pool and volume
    openebs_volume_status # Status of volume (0, 1, 2, 3) = {"Offline", "Healthy", "Degraded", "Rebuilding"}
    openebs_write_latency # Write latency on replica
    openebs_zfs_command_error # zfs command error counter
    openebs_zfs_list_command_error # zfs list command error counter
    openebs_zfs_parse_error # zfs parse error counter
    openebs_zfs_list_failed_to_initialize_libuzfs_client_error_counter # Total number of failed to initialize libuzfs client errors in zfs list command
    openebs_zfs_list_no_dataset_available_error_counter # Total number of no datasets errors in zfs list command
    openebs_zfs_list_parse_error # Total number of zfs list parse errors
    openebs_zfs_list_request_reject_count # Total number of rejected requests of zfs list
    openebs_zfs_stats_command_error # Total number of zfs command errors
    openebs_zfs_stats_parse_error_counter # Total number of zfs stats parse errors
    openebs_zfs_stats_reject_request_count # Total number of rejected requests of zfs stats
    openebs_zpool_list_command_error # Total number of zpool command errors
    openebs_zpool_list_failed_to_initialize_libuzfs_client_error_counter # Total number of initialize libuzfs client errors
    openebs_zpool_list_incomplete_stdout_error # Total number of incomplete stdout errors
    openebs_zpool_list_no_pool_available_error # Total number of no pool available errors
    openebs_zpool_list_parse_error_count # Total number of parsing errors
    openebs_zpool_list_reject_request_count # Total number of rejected requests of zpool command
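Once these metrics are scraped by Prometheus, they can be used in queries and alert rules. Below is a sketch of a hypothetical alerting rule on pool usage; the group name, alert name and 80% threshold are illustrative, not part of OpenEBS.

```yaml
# Hypothetical Prometheus alerting rule built on the pool metrics above
groups:
- name: cstor-pool
  rules:
  - alert: CStorPoolAlmostFull
    expr: openebs_used_pool_capacity_percent > 80   # illustrative threshold
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "cStor pool usage is above 80%"
```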

Setting Performance Tunings

Users can set the available performance tunings in the StorageClass based on their workload. The following tunings are supported:

  • cStor target queue depth
    • Limits the ongoing IO count from the client. Default is 32.
  • cStor target worker threads
    • Sets the number of threads working on the above queue. It is specified by Luworkers. The default value is 6. On nodes with more cores and RAM, this value can be set to 16, which means 16 threads will run for each volume.
  • cStor volume replica worker threads
    • Associated with cStorVolumeReplica.
    • Specified by ZvolWorkers.
    • Defaults to the number of cores on the machine.
    • On nodes with more cores and RAM, this value can be set to 16.

Note: These configurations can only be set during volume provisioning. Default values are used if invalid or no values are provided.

Example Configuration:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: openebs-cstor-pool
      annotations:
        openebs.io/cas-type: cstor
        cas.openebs.io/config: |
          - name: StoragePoolClaim
            value: "sparse-claim-auto"
          - name: QueueDepth
            value: "32"
          - name: Luworkers
            value: "16"
          - name: ZvolWorkers
            value: "16"
    provisioner: openebs.io/provisioner-iscsi

Note: For sequential workloads, setting Luworkers to 1 is good. For random workloads, the default value of 6 is good.

Expanding cStor Pool to a New Node

cStorPools can be scaled horizontally when needed, typically when a new Kubernetes node is added or when the existing cStorPool instances become full with cStorVolumes. This feature was added in 0.8.1.

The steps for expanding the pool to new nodes are given below.

By specifying blockDeviceList

If you are following this approach, you should have created the cStor pool initially using the steps provided here. To expand the pool onto a new OpenEBS node, edit the corresponding pool configuration (SPC) YAML and add the required block device names under blockDeviceList.

Step 1: Edit the existing pool configuration spec that you originally used and apply it, or directly edit the in-use spec using kubectl edit spc <SPC name>.

Step 2: Add the new block device names from the new nodes under blockDeviceList. You can use kubectl get blockdevice -n <openebs_namespace> to obtain the block device CRs.

Step 3: Apply or save the configuration file; a new cStorPool instance will be created on the expected node.

Step 4: Verify the new pool creation by checking:

  • If a new cStor Pool POD is created (kubectl get pods -n openebs | grep <pool name>)
  • If a new cStorPool CR is created (kubectl get csp)
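As an illustration of Step 2 above, a hypothetical SPC excerpt after appending a block device from the new node; the block device names are placeholders to be replaced with the actual CR names from kubectl get blockdevice.

```yaml
# Hypothetical SPC excerpt: one new blockdevice entry appended for
# the newly added node (names are placeholders)
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk
spec:
  name: cstor-disk
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-<id-on-existing-node>
    - blockdevice-<id-on-new-node>      # newly added entry
```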

Expanding Size of a cStor Pool Instance on a Node (by adding physical/virtual disks to a pool instance)

A pool instance is local to a node. A pool instance can be started with as little as one disk (in striped mode) or two disks (in mirrored mode). cStor pool instances support thin provisioning of data, which means that provisioning of any volume size will be successful from a given cStorPool config.

However, as the actual used capacity of the pool grows, more disks need to be added. Currently, adding more disks to an existing pool is a manual operation. You can add more disks to the existing StoragePool with the steps provided here.

Expanding size of a cStor Pool Instance on a Node (by expanding the size of cloud disks)

When a cloud disk is used to create a cStor storage pool and you want to expand the existing pool capacity, you can expand the size of the cloud disk and reflect the change in the corresponding cStor storage pool, thereby increasing the pool capacity. The steps for this activity are documented here.

Expanding the cStor Volume Capacity

The OpenEBS control plane does not support increasing the size of a volume seamlessly. Increasing the size of a provisioned volume requires support from the Kubernetes kubelet, as the existing connection has to be remounted to reflect the new volume size. This can also be addressed with the new CSI plugin, where the responsibility for mount, unmount and remount actions lies with the vendor CSI plugin rather than the kubelet itself.

The OpenEBS team is working on both the CSI plugin and the feature to resize the provisioned volume when the PVC is patched with a new size. Currently, this is a manual operation and the steps for expanding the cStor volume are mentioned here.

See Also:

Understand cStorPools

cStorPool use case for Prometheus

cStor roadmap
