Alpha Features

This section describes OpenEBS features that are currently in Alpha. These features are not recommended for use on production clusters. We suggest familiarizing yourself with these features on test clusters and reaching out on the OpenEBS community Slack if you have any queries or need help trying them out.

Note: Upgrade is not supported for features in the Alpha version.

cStor

Running a sample application on a cStor volume provisioned via CSI provisioner

Provisioning cStor pool using CSPC operator

Expand a cStor volume created using CSI provisioner

Snapshot and Cloning the cStor volume created using CSI provisioner

Blockdevice replacement in a cStor pool created using CSPC operator

Jiva

Run a sample application on Jiva volume provisioned via Jiva CSI Provisioner

Running a sample application on a cStor volume provisioned via CSI provisioner

The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration systems (COs) like Kubernetes. It allows a storage vendor to implement a single CSI driver that works across all COs. OpenEBS cStor volumes can now be provisioned with a CSI driver from OpenEBS version 1.2 onwards. This feature is under active development and is considered to be in Alpha state.

Note: The current implementation only supports provisioning, de-provisioning, expansion, and snapshot and clone of cStor volumes.

Prerequisites

  • Kubernetes version 1.14 or higher is installed.
  • iSCSI initiator utilities must be installed on all the worker nodes.
  • Recommended OpenEBS version is 1.4 or above. The steps to install OpenEBS are here.
  • You have access to install RBAC components into kube-system namespace. The OpenEBS cStor CSI driver components are installed in kube-system namespace to allow them to be flagged as system critical components.
  • You need to enable the feature gates ExpandCSIVolumes and ExpandInUsePersistentVolumes on kubelet in each worker node (see the example after this list).
  • You need to enable the feature gates ExpandCSIVolumes and ExpandInUsePersistentVolumes on kube-apiserver in the master node.
  • Base OS on worker nodes can be Ubuntu 16.04, Ubuntu 18.04 or CentOS.
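
The way feature gates are enabled depends on how your cluster was set up. The following is a minimal sketch assuming a kubeadm-style setup, where kubelet flags come from a drop-in environment file and kube-apiserver runs as a static pod manifest; adjust the paths for your distribution:

  # On each worker node, add the feature gates to the kubelet flags
  # (for example in /etc/default/kubelet) and restart kubelet:
  KUBELET_EXTRA_ARGS=--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true
  systemctl restart kubelet

  # On the master node, add the same flag to the kube-apiserver command in
  # /etc/kubernetes/manifests/kube-apiserver.yaml:
  #   - --feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true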

Overview

  • Install OpenEBS CSI Driver
  • Provision a cStor Pool Cluster
  • Create a cStor StorageClass with cStor CSI provisioner
  • Run your application on cStor volume provisioned via CSI Provisioner

Install OpenEBS CSI Driver

The node components make use of the host iSCSI binaries for iSCSI connection management. Depending on the OS, the csi-operator will have to be modified to load the required iSCSI files into the node pods.

OpenEBS cStor CSI driver components can be installed by running the following command:

Depending on the OS, select the appropriate deployment file.

  • For Ubuntu 16.04 and CentOS:

    1. kubectl apply -f https://raw.githubusercontent.com/openebs/charts/master/docs/csi-operator-1.8.0.yaml
  • For Ubuntu 18.04:

    1. kubectl apply -f https://raw.githubusercontent.com/openebs/charts/master/docs/csi-operator-1.8.0-ubuntu-18.04.yaml

Verify that the OpenEBS CSI Components are installed.

  1. kubectl get pods -n kube-system

Example output:

NAME                                                         READY   STATUS    RESTARTS   AGE
event-exporter-v0.2.5-7df89f4b8f-ml8qz                       2/2     Running   0          35m
fluentd-gcp-scaler-54ccb89d5-jk4gs                           1/1     Running   0          35m
fluentd-gcp-v3.1.1-56976                                     2/2     Running   0          35m
fluentd-gcp-v3.1.1-jvqxn                                     2/2     Running   0          35m
fluentd-gcp-v3.1.1-kwvsx                                     2/2     Running   0          35m
heapster-7966498b57-w4mrs                                    3/3     Running   0          35m
kube-dns-5877696fb4-jftrh                                    4/4     Running   0          35m
kube-dns-5877696fb4-s6dgg                                    4/4     Running   0          35m
kube-dns-autoscaler-85f8bdb54-m584t                          1/1     Running   0          35m
kube-proxy-gke-ranjith-csi-default-pool-a9a13f27-6qv1        1/1     Running   0          35m
kube-proxy-gke-ranjith-csi-default-pool-a9a13f27-cftl        1/1     Running   0          35m
kube-proxy-gke-ranjith-csi-default-pool-a9a13f27-q5ws        1/1     Running   0          35m
l7-default-backend-8f479dd9-zxbtf                            1/1     Running   0          35m
metrics-server-v0.3.1-8d4c5db46-fw66z                        2/2     Running   0          35m
openebs-cstor-csi-controller-0                               6/6     Running   0          77s
openebs-cstor-csi-node-hflmf                                 2/2     Running   0          73s
openebs-cstor-csi-node-mdgqq                                 2/2     Running   0          73s
openebs-cstor-csi-node-rwshl                                 2/2     Running   0          73s
prometheus-to-sd-5b68q                                       1/1     Running   0          35m
prometheus-to-sd-c5bwl                                       1/1     Running   0          35m
prometheus-to-sd-s7fdv                                       1/1     Running   0          35m
stackdriver-metadata-agent-cluster-level-8468cc67d8-p864w    1/1     Running   0          35m

In the above output, openebs-cstor-csi-controller-0 is running, and openebs-cstor-csi-node-hflmf, openebs-cstor-csi-node-mdgqq and openebs-cstor-csi-node-rwshl are running, one on each worker node.
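
Optionally, you can also confirm that the CSI driver has registered with each node. This is a sketch that assumes your Kubernetes version exposes the CSINode API (available from 1.14 onwards):

  kubectl get csinode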

Provision a cStor Pool Cluster

Apply cStor operator YAML file using the following command:

  1. kubectl apply -f https://raw.githubusercontent.com/openebs/charts/master/docs/cstor-operator-1.8.0.yaml

Verify the status of CSPC operator using the following command:

  1. kubectl get pod -n openebs -l name=cspc-operator

Example output:

NAME                            READY   STATUS    RESTARTS   AGE
cspc-operator-c4dc96bb9-km4dh   1/1     Running   0          43s

Now, you have to create a cStor Pool Cluster (CSPC), which is the group of cStor pools in the cluster. A CSPC can be created by applying the sample YAML provided below:

apiVersion: openebs.io/v1alpha1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-cspc
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "gke-ranjith-csi-default-pool-a9a13f27-6qv1"
      raidGroups:
        - type: "stripe"
          isWriteCache: false
          isSpare: false
          isReadCache: false
          blockDevices:
            - blockDeviceName: "blockdevice-936911c5c9b0218ed59e64009cc83c8f"
      poolConfig:
        cacheFile: ""
        defaultRaidGroupType: "stripe"
        overProvisioning: false
        compression: "off"

Edit the following parameters in the sample CSPC YAML:

  • blockDeviceName :- Provide the name of the block device to be used for provisioning the cStor pool. Each storage pool will be created on a node using the blockdevices attached to that node.

  • kubernetes.io/hostname: Provide the hostname where the cStor pool will be created using the set of block devices.

The above sample YAML creates a cStor pool of striped type using the provided block device on the corresponding node. If you need to create multiple cStor pools in the cluster with different RAID types, see the Provisioning cStor pool using CSPC operator section below.

In this example, the above YAML is modified and saved as cspc.yaml. Apply the modified CSPC YAML spec using the following command to create a cStor Pool Cluster:

  1. kubectl apply -f cspc.yaml

Verify the cStor pool details by running the following command:

  1. kubectl get cspc -n openebs

Example output:

NAME              AGE
cstor-disk-cspc   23s

Verify if the cStor pool instance is created successfully using the following command:

  1. kubectl get cspi -n openebs

Example output:

NAME                   HOSTNAME                                     ALLOCATED   FREE    CAPACITY   STATUS   AGE
cstor-disk-cspc-7hkl   gke-ranjith-csi-default-pool-a9a13f27-6qv1   50K         39.7G   39.8G      ONLINE   2m43s

Create a cStor StorageClass with cStor CSI provisioner

Create a Storage Class to dynamically provision volumes using cStor CSI provisioner. You can save the following sample StorageClass YAML spec as cstor-csi-sc.yaml.

  1. kind: StorageClass
  2. apiVersion: storage.k8s.io/v1
  3. metadata:
  4. name: openebs-csi-cstor-disk
  5. provisioner: cstor.csi.openebs.io
  6. allowVolumeExpansion: true
  7. parameters:
  8. cas-type: cstor
  9. replicaCount: "1"
  10. cstorPoolCluster: cstor-disk-cspc

Specify the correct cstorPoolCluster name from your cluster and the desired replicaCount for the cStor volume.

Note: The replicaCount should be less than or equal to the number of cStor pool instances available in the CSPC.

A sample StorageClass YAML spec can be found in the GitHub repo.

Apply the above sample Storage Class YAML using the following command:

  1. kubectl apply -f cstor-csi-sc.yaml

Example output:

  1. NAME PROVISIONER AGE
  2. openebs-csi-cstor-disk cstor.csi.openebs.io 5s
  3. openebs-device openebs.io/local 59m
  4. openebs-hostpath openebs.io/local 59m
  5. openebs-jiva-default openebs.io/provisioner-iscsi 59m
  6. openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 59m
  7. standard (default) kubernetes.io/gce-pd 66m

The StorageClass openebs-csi-cstor-disk is created successfully.

Run your application on cStor volume provisioned via CSI Provisioner

Run your application by specifying the above created StorageClass for the PVC. A sample application YAML can be downloaded using the following command:

  1. wget https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/busybox-csi-cstor-sparse.yaml

Modify the YAML spec with the required PVC storage size and storageClassName. In this example, storageClassName is updated to openebs-csi-cstor-disk.
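
For reference, the relevant part of the PVC in that spec looks roughly like the following sketch (the claim name and size here are illustrative and may differ slightly in the downloaded file):

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: demo-csivol-claim
  spec:
    storageClassName: openebs-csi-cstor-disk
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi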

The following example launches a busybox app with a cStor Volume provisioned via CSI Provisioner.

  1. kubectl apply -f busybox-csi-cstor-sparse.yaml

Now the application will be running on the volume provisioned via cStor CSI provisioner. Verify the status of the PVC using the following command:

  1. kubectl get pvc

Example output:

NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
demo-csivol-claim   Bound    pvc-723283b6-02bc-11ea-a139-42010a8000b2   5Gi        RWO            openebs-csi-cstor-disk   17m

Verify the status of the application by running the following command:

  1. kubectl get pod

Example output:

NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          97s

Verify if the application is running with the expected result using the following command:

  1. kubectl exec -it busybox -- cat /mnt/openebs-csi/date.txt

The busybox pod is instructed to write the date into the mounted path at /mnt/openebs-csi/date.txt when it starts.

Example output:

Sat Nov 9 06:59:27 UTC 2019

Note: While the asynchronous handling of the volume provisioning is in progress, the application pod description may show errors like:

  • Waiting for CVC to be bound: implies the volume components are still being created.

  • Volume is not ready: Replicas yet to connect to controller: implies the volume components are already created but are yet to establish connections with each other.

Expand a cStor volume created using CSI provisioner

This section gives the steps to expand a cStor volume created using the CSI provisioner.

Notes to remember:

  • Only dynamically provisioned cStor volumes can be resized.
  • You can only expand cStor volumes containing a file system if the file system is ext3, ext4 or xfs.
  • Ensure that the corresponding StorageClass has the allowVolumeExpansion field set to true when the volume is provisioned.
  • You will need to enable the ExpandCSIVolumes and ExpandInUsePersistentVolumes feature gates on the kubelets and kube-apiserver. Other general prerequisites for cStor volumes via the CSI provisioner can be found here.

Steps to perform the cStor volume expansion:

  1. Perform the following command to get the details of the PVC.

    1. kubectl get pvc

    Example output:

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    demo-csivol-claim   Bound    pvc-723283b6-02bc-11ea-a139-42010a8000b2   5Gi        RWO            openebs-csi-cstor-disk   66m

  2. Update the increased PVC size in the following section of the PVC YAML.

    • pvc.spec.resources.requests.storage.

    This can be done by editing the PVC YAML spec using the following command:

    1. kubectl edit pvc demo-csivol-claim

    Example snippet:

    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 9Gi
      storageClassName: openebs-csi-cstor-disk

    In the above snippet, storage is modified to 9Gi from 5Gi.
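
    Alternatively, the same change can be applied non-interactively with kubectl patch. This is a sketch using the example PVC name from above:

    kubectl patch pvc demo-csivol-claim -p '{"spec":{"resources":{"requests":{"storage":"9Gi"}}}}'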

  3. Wait for the updated capacity to reflect in PVC status (pvc.status.capacity.storage). Perform the following command to verify the updated size of the PVC:

    1. kubectl get pvc

    Example snippet:

    NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    demo-csivol-claim   Bound    pvc-723283b6-02bc-11ea-a139-42010a8000b2   9Gi        RWO            openebs-csi-cstor-disk   68m

  4. Check whether the size is reflected on the application pod where the above volume is mounted.

Snapshot and Cloning the cStor volume created using CSI provisioner

This section gives the steps to take a snapshot and clone a cStor volume created using the CSI provisioner.

Notes to remember:

  • You will need to enable the VolumeSnapshotDataSource feature gate on the kubelet and kube-apiserver. Other general prerequisites for cStor volumes via the CSI provisioner can be found here.

  • Recommended OpenEBS Version is 1.8

To capture a snapshot and clone the cStor volume:

  1. Get the details of PVC and PV of the CSI based cStor volume using the following command:

    PVC:

    1. kubectl get pvc

    Example output:

    1. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    2. demo-csivol-claim Bound pvc-c4868664-1a84-11ea-a1ad-42010aa00fd2 5Gi RWO openebs-csi-cstor-disk 8m39s

    PV:

    1. kubectl get pv

    Example output:

    1. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    2. pvc-c4868664-1a84-11ea-a1ad-42010aa00fd2 5Gi RWO Delete Bound default/demo-csivol-claim openebs-csi-cstor-disk 22s
  2. Create a snapshot class pointing to cStor CSI driver. The following command will create a snapshot class pointing to cStor CSI driver:

    1. kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/snapshot-class.yaml

    Verify if snapshot class is created successfully using the following command:

    1. kubectl get volumesnapshotclass

    Example output:

    1. NAME AGE
    2. csi-cstor-snapshotclass 94s
  3. Get the YAML for snapshot creation of a PVC using the following command:

    1. wget https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/snapshot.yaml

    In this example, downloaded file is saved as snapshot.yaml.

  4. Edit the snapshot.yaml downloaded in the previous step to update:

    metadata.name :- Name of the snapshot.

    spec.snapshotClassName :- Name of the snapshot class pointing to the cStor CSI driver, which you can get from step 2.

    spec.source.name :- Source PVC for which you are going to take the snapshot.
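
    After editing, the snapshot spec looks roughly like the following sketch, using the example names from this walkthrough and assuming the v1alpha1 snapshot API shipped with the snapshot class above:

    apiVersion: snapshot.storage.k8s.io/v1alpha1
    kind: VolumeSnapshot
    metadata:
      name: demo-snapshot
    spec:
      snapshotClassName: csi-cstor-snapshotclass
      source:
        name: demo-csivol-claim
        kind: PersistentVolumeClaim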

  5. Apply the modified snapshot YAML using the following command:

    1. kubectl apply -f snapshot.yaml

    Verify if the snapshot has been created successfully using the following command:

    1. kubectl get volumesnapshots.snapshot.storage.k8s.io

    Example output:

    1. NAME AGE
    2. demo-snapshot 16s

    The output shows that snapshot of the source PVC is created successfully.

  6. Now, let's create a clone volume using the above snapshot. Get the PVC YAML spec for creating the clone volume from the given snapshot.

    1. wget https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/pvc-clone.yaml

    The downloaded file is saved as pvc-clone.yaml.

  7. Edit the downloaded clone PVC YAML spec to update:

    • metadata.name :- Name of the clone PVC.
    • spec.storageClassName :- Same StorageClass used while creating the source PVC.
    • spec.dataSource.name :- Name of the snapshot.
    • spec.resources.requests.storage :- The size of the volume being cloned or restored. This should be the same as the source PVC.
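
    A filled-in sketch of the clone PVC, using the example names from this walkthrough and assuming the v1alpha1 snapshot API group, could look like:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-clone
    spec:
      storageClassName: openebs-csi-cstor-disk
      dataSource:
        name: demo-snapshot
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
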
  8. Run the following command with the modified clone PVC YAML to create a cloned PVC.

    1. kubectl apply -f pvc-clone.yaml
  9. Verify the status of new cloned PVC and PV using the following command:

    PVC:

    1. kubectl get pvc

    Example output:

    1. NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
    2. demo-csivol-claim Bound pvc-c4868664-1a84-11ea-a1ad-42010aa00fd2 5Gi RWO openebs-csi-cstor-disk 18m
    3. pvc-clone Bound pvc-43340dc6-1a87-11ea-a1ad-42010aa00fd2 5Gi RWO openebs-csi-cstor-disk 16s

    PV:

    1. kubectl get pv

    Example output:

    1. NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
    2. pvc-43340dc6-1a87-11ea-a1ad-42010aa00fd2 5Gi RWO Delete Bound default/pvc-clone openebs-csi-cstor-disk 17s
    3. pvc-c4868664-1a84-11ea-a1ad-42010aa00fd2 5Gi RWO Delete Bound default/demo-csivol-claim openebs-csi-cstor-disk 9m43s
  10. Now this cloned volume can be mounted in an application to access the snapshot data.

Provisioning cStor pool using CSPC operator

CSPC is a new schema for cStor pool provisioning; it also refactors the code to make cStor a completely pluggable engine in OpenEBS. The new schema also makes it easy to perform day 2 operations on cStor pools. The following are the new terms related to CSPC:

  • CStorPoolCluster (CSPC)
  • CStorPoolInstance (CSPI)
  • cspc-operator

Note: Volume provisioning on CSPC pools will be supported only via OpenEBS CSI provisioner.

The current workflow to provision CSPC pool is as follows:

  1. OpenEBS should be installed. Recommended OpenEBS version is 1.8.
  2. Install CSPC operator using YAML.
  3. Identify the available blockdevices which are Unclaimed and Active.
  4. Apply the CSPC pool YAML spec by filling required fields.
  5. Verify the CSPC pool details.

Install OpenEBS

Latest OpenEBS version can be installed using the following command:

  1. kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.8.0.yaml

Verify if OpenEBS pods are in Running state using the following command:

  1. kubectl get pod -n openebs

Example output:

  1. NAME READY STATUS RESTARTS AGE
  2. maya-apiserver-77f9cc9f9b-jg825 1/1 Running 3 90s
  3. openebs-admission-server-8c5b8565-d2q58 1/1 Running 0 79s
  4. openebs-localpv-provisioner-f458bc8c4-bjmkq 1/1 Running 0 78s
  5. openebs-ndm-lz4n6 1/1 Running 0 80s
  6. openebs-ndm-operator-7d7c9d966d-bqlnj 1/1 Running 1 79s
  7. openebs-ndm-spm7f 1/1 Running 0 80s
  8. openebs-ndm-tm8ff 1/1 Running 0 80s
  9. openebs-provisioner-5fbd8fc74c-6zcnq 1/1 Running 0 82s
  10. openebs-snapshot-operator-7d6dd4b77f-444zh 2/2 Running 0 81s

Install CSPC Operator

Install CSPC operator by using the following command:

  1. kubectl apply -f https://raw.githubusercontent.com/openebs/charts/master/docs/cstor-operator-1.8.0.yaml

Verify if CSPC operator is in Running state using the following command:

  1. kubectl get pod -n openebs -l name=cspc-operator

Example output:

  1. NAME READY STATUS RESTARTS AGE
  2. cspc-operator-c4dc96bb9-zvfws 1/1 Running 0 115s

Identify the blockdevices

Get the details of all blockdevices attached in the cluster using the following command. Identify the available blockdevices which are Unclaimed and Active. Also verify that these identified blockdevices do not contain any filesystem. These are the candidates for CSPC pool creation and will be used in the next step.

  1. kubectl get bd -n openebs

Example output:

  1. NAME NODENAME SIZE CLAIMSTATE STATUS AGE
  2. blockdevice-1c10eb1bb14c94f02a00373f2fa09b93 gke-ranjith-cspc-default-pool-f7a78720-zr1t 42949672960 Unclaimed Active 7h43m
  3. blockdevice-2594fa672b07f200f299f59cad340326 gke-ranjith-cspc-default-pool-f7a78720-9436 42949672960 Unclaimed Active 40s
  4. blockdevice-77f834edba45b03318d9de5b79af0734 gke-ranjith-cspc-default-pool-f7a78720-k1cr 42949672960 Unclaimed Active 7h43m
  5. blockdevice-936911c5c9b0218ed59e64009cc83c8f gke-ranjith-cspc-default-pool-f7a78720-9436 42949672960 Unclaimed Active 7h44m

In the above example, two blockdevices are attached to one node, and one disk each is attached to the other two nodes.

Apply the CSPC pool YAML

Create a CSPC pool YAML spec to provision CSPC pools using the sample template provided below.

  1. apiVersion: openebs.io/v1alpha1
  2. kind: CStorPoolCluster
  3. metadata:
  4. name: <CSPC_name>
  5. namespace: openebs
  6. spec:
  7. pools:
  8. - nodeSelector:
  9. kubernetes.io/hostname: "<Node_name>"
  10. raidGroups:
  11. - type: "<RAID_type>"
  12. isWriteCache: false
  13. isSpare: false
  14. isReadCache: false
  15. blockDevices:
  16. - blockDeviceName: "<blockdevice_name>"
  17. poolConfig:
  18. cacheFile: ""
  19. defaultRaidGroupType: "<RAID_type>"
  20. overProvisioning: false
  21. compression: "off"

Here, we describe the parameters used in the above CSPC pool creation template.

  • CSPC_name :- Name of the CSPC.
  • Node_name :- Name of the node where the pool is to be created using the available blockdevices attached to that node.
  • RAID_type :- RAID configuration used for pool creation. Supported RAID types are stripe, mirror, raidz and raidz2. If spec.pools.raidGroups.type is specified, then spec.pools.poolConfig.defaultRaidGroupType will not be considered for that particular raid group.
  • blockdevice_name :- Identify the available blockdevices which are Unclaimed and Active. Also verify that the identified blockdevices do not contain any filesystem (see the check shown after this list).
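
One way to confirm that a candidate blockdevice carries no filesystem is to inspect its blockdevice CR and check that spec.filesystem does not report any fsType. This is a sketch using one of the blockdevice names from the earlier listing:

  kubectl get bd blockdevice-936911c5c9b0218ed59e64009cc83c8f -n openebs -o yaml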

The above is a sample CSPC template YAML configuration that provisions a cStor pool using the CSPC operator. The following snippet describes the pool details for one node. If multiple pools are to be created on different nodes, add the below configuration for each node.

  1. - nodeSelector:
  2. kubernetes.io/hostname: "<Node1_name>"
  3. raidGroups:
  4. - type: "<RAID_type>"
  5. isWriteCache: false
  6. isSpare: false
  7. isReadCache: false
  8. blockDevices:
  9. - blockDeviceName: "<blockdevice_name>"
  10. poolConfig:
  11. cacheFile: ""
  12. defaultRaidGroupType: "<RAID_type>"
  13. overProvisioning: false
  14. compression: "off"

The following are some sample CSPC configuration YAML specs:

  • Striped- One striped pool on each node using the blockdevices attached to that node. In the below example, one node has 2 blockdevices and the other two nodes have one disk each.

    1. apiVersion: openebs.io/v1alpha1
    2. kind: CStorPoolCluster
    3. metadata:
    4. name: cstor-pool-stripe
    5. namespace: openebs
    6. spec:
    7. pools:
    8. - nodeSelector:
    9. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-9436"
    10. raidGroups:
    11. - type: "stripe"
    12. isWriteCache: false
    13. isSpare: false
    14. isReadCache: false
    15. blockDevices:
    16. - blockDeviceName: "blockdevice-936911c5c9b0218ed59e64009cc83c8f"
    17. - blockDeviceName: "blockdevice-2594fa672b07f200f299f59cad340326"
    18. poolConfig:
    19. cacheFile: ""
    20. defaultRaidGroupType: "stripe"
    21. overProvisioning: false
    22. compression: "off"
    23. - nodeSelector:
    24. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-k1cr"
    25. raidGroups:
    26. - type: "stripe"
    27. isWriteCache: false
    28. isSpare: false
    29. isReadCache: false
    30. blockDevices:
    31. - blockDeviceName: "blockdevice-77f834edba45b03318d9de5b79af0734"
    32. poolConfig:
    33. cacheFile: ""
    34. defaultRaidGroupType: "stripe"
    35. overProvisioning: false
    36. compression: "off"
    37. - nodeSelector:
    38. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-zr1t"
    39. raidGroups:
    40. - type: "stripe"
    41. isWriteCache: false
    42. isSpare: false
    43. isReadCache: false
    44. blockDevices:
    45. - blockDeviceName: "blockdevice-1c10eb1bb14c94f02a00373f2fa09b93"
    46. poolConfig:
    47. cacheFile: ""
    48. defaultRaidGroupType: "stripe"
    49. overProvisioning: false
    50. compression: "off"
  • Mirror- One mirror pool on one node using 2 blockdevices.

    1. apiVersion: openebs.io/v1alpha1
    2. kind: CStorPoolCluster
    3. metadata:
    4. name: cstor-pool-mirror
    5. namespace: openebs
    6. spec:
    7. pools:
    8. - nodeSelector:
    9. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-9436"
    10. raidGroups:
    11. - type: "mirror"
    12. isWriteCache: false
    13. isSpare: false
    14. isReadCache: false
    15. blockDevices:
    16. - blockDeviceName: "blockdevice-936911c5c9b0218ed59e64009cc83c8f"
    17. - blockDeviceName: "blockdevice-78f6be57b9eca9c08a2e18e8f894df30"
    18. poolConfig:
    19. cacheFile: ""
    20. defaultRaidGroupType: "mirror"
    21. overProvisioning: false
    22. compression: "off"
  • RAIDZ- Single parity raid configuration with 3 blockdevices.

    1. apiVersion: openebs.io/v1alpha1
    2. kind: CStorPoolCluster
    3. metadata:
    4. name: cstor-pool-raidz
    5. namespace: openebs
    6. spec:
    7. pools:
    8. - nodeSelector:
    9. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-9436"
    10. raidGroups:
    11. - type: "raidz"
    12. isWriteCache: false
    13. isSpare: false
    14. isReadCache: false
    15. blockDevices:
    16. - blockDeviceName: "blockdevice-936911c5c9b0218ed59e64009cc83c8f"
    17. - blockDeviceName: "blockdevice-78f6be57b9eca9c08a2e18e8f894df30"
    18. - blockDeviceName: "blockdevice-77f834edba45b03318d9de5b79af0734"
    19. poolConfig:
    20. cacheFile: ""
    21. defaultRaidGroupType: "raidz"
    22. overProvisioning: false
    23. compression: "off"
  • RAIDZ2- Dual parity raid configuration with 6 blockdevices.

    1. apiVersion: openebs.io/v1alpha1
    2. kind: CStorPoolCluster
    3. metadata:
    4. name: cstor-pool-raidz2
    5. namespace: openebs
    6. spec:
    7. pools:
    8. - nodeSelector:
    9. kubernetes.io/hostname: "gke-ranjith-cspc-default-pool-f7a78720-9436"
    10. raidGroups:
    11. - type: "raidz2"
    12. isWriteCache: false
    13. isSpare: false
    14. isReadCache: false
    15. blockDevices:
    16. - blockDeviceName: "blockdevice-936911c5c9b0218ed59e64009cc83c8f"
    17. - blockDeviceName: "blockdevice-78f6be57b9eca9c08a2e18e8f894df30"
    18. - blockDeviceName: "blockdevice-77f834edba45b03318d9de5b79af0734"
    19. - blockDeviceName: "blockdevice-2594fa672b07f200f299f59cad340326"
    20. - blockDeviceName: "blockdevice-cbd2dc4f3ff3f463509b695173b6064b"
    21. - blockDeviceName: "blockdevice-1c10eb1bb14c94f02a00373f2fa09b93"
    22. poolConfig:
    23. cacheFile: ""
    24. defaultRaidGroupType: "raidz2"
    25. overProvisioning: false
    26. compression: "off"

    Next, you can provision a cStor volume and then deploy an application on this volume. The steps can be found here.

Verify CSPC Pool Details

Verify if the pool is in Running state by checking the status of the CSPC, the CSPIs and the pool pods running in the openebs namespace.

The following command will get the details of CSPC status:

  1. kubectl get cspc -n openebs

Example output:

  1. NAME AGE
  2. cstor-pool-stripe 18s

The following command will get the details of CSPI status:

  1. kubectl get cspi -n openebs

Example output:

  1. NAME HOSTNAME ALLOCATED FREE CAPACITY STATUS AGE
  2. cstor-pool-stripe-cfsm gke-ranjith-cspc-default-pool-f7a78720-zr1t 69.5K 39.7G 39.8G ONLINE 87s
  3. cstor-pool-stripe-mnbh gke-ranjith-cspc-default-pool-f7a78720-k1cr 69.5K 39.7G 39.8G ONLINE 87s
  4. cstor-pool-stripe-sxpr gke-ranjith-cspc-default-pool-f7a78720-9436 69.5K 79.5G 79.5G ONLINE 87s

The following command will get the details of CSPC pool pod status:

  1. kubectl get pod -n openebs | grep -i <CSPC_name>

Example command:

  1. kubectl get pod -n openebs | grep -i cstor-pool-stripe

Example output:

  1. cstor-pool-stripe-cfsm-b947988c7-sdtjz 3/3 Running 0 25s
  2. cstor-pool-stripe-mnbh-74cb58df69-tpkm6 3/3 Running 0 25s
  3. cstor-pool-stripe-sxpr-59c5f46fd6-jz4n4 3/3 Running 0 25s

Also verify if all the blockdevices are claimed correctly by checking the CLAIMSTATE.

  1. kubectl get bd -n openebs

Example output:

  1. NAME NODENAME SIZE CLAIMSTATE STATUS AGE
  2. blockdevice-1c10eb1bb14c94f02a00373f2fa09b93 gke-ranjith-cspc-default-pool-f7a78720-zr1t 42949672960 Claimed Active 7h47m
  3. blockdevice-2594fa672b07f200f299f59cad340326 gke-ranjith-cspc-default-pool-f7a78720-9436 42949672960 Claimed Active 4m24s
  4. blockdevice-77f834edba45b03318d9de5b79af0734 gke-ranjith-cspc-default-pool-f7a78720-k1cr 42949672960 Claimed Active 7h47m
  5. blockdevice-936911c5c9b0218ed59e64009cc83c8f gke-ranjith-cspc-default-pool-f7a78720-9436 42949672960 Claimed Active 7h47m

Blockdevice replacement in a cStor pool created using CSPC operator

The following steps help to perform blockdevice replacement in cStor pools that were created using CSPC. It is recommended to replace only one blockdevice per raid group of the cStor pool at a time. For example, if a cStor pool is created using 2 mirror raid groups, then only one blockdevice can be replaced per raid group. The prerequisites for performing blockdevice replacement are listed below.

Prerequisites:

  1. There should be a blockdevice present on the same node where the old blockdevice is attached. The available blockdevice should be in Unclaimed and Active state and should not contain any filesystem or partition.

  2. The capacity of the new blockdevice should be greater than or equal to that of old blockdevice.

Note: Blockdevice replacement is not supported on cStor pool created using stripe configuration.

The following are the steps to perform blockdevice replacement:

  • Verify the status of cStor pool instance using the following command:

    1. kubectl get cspi -n openebs

    Example output:

    NAME                     HOSTNAME                                     ALLOCATED   FREE    CAPACITY   STATUS   AGE
    cstor-pool-mirror-6gdq   gke-ranjith-csi-default-pool-6643eeb8-77f7   506K        39.7G   39.8G      ONLINE   103m

  • Verify the details of cStor pool cluster configuration using the following command:

    1. kubectl get cspc <CSPC name> -n <openebs_namespace> -o yaml

    Example command:

    1. kubectl get cspc -n openebs

    Example output:

    NAME                AGE
    cstor-pool-mirror   106m

  • Obtain the details of used blockdevices and available blockdevices using the following command:

    1. kubectl get bd -n openebs

    Example output:

    NAME                                           NODENAME                                     SIZE          CLAIMSTATE   STATUS   AGE
    blockdevice-070383c017742c82d14103a2d2047c0f   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Claimed      Active   122m
    blockdevice-41d4416f6199a14b706e2ced69e2694a   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Claimed      Active   122m
    blockdevice-c0179d93aebfd90c09a7a864a9148f85   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Unclaimed    Active   122m
    blockdevice-ef810d7dfc8e4507359963fab7e9647e   gke-ranjith-csi-default-pool-6643eeb8-77f7   53687091200   Unclaimed    Active   122m

    In this example, blockdevices `blockdevice-070383c017742c82d14103a2d2047c0f` and `blockdevice-41d4416f6199a14b706e2ced69e2694a` are used in the pool configuration `cstor-pool-mirror` mentioned in the previous step. Both blockdevices are attached to the same node. There are also 2 available blockdevice CRs on the same node in `Unclaimed` state. These 2 blockdevices satisfy all the prerequisite conditions and can be used to replace either of the used blockdevices, keeping in mind that only one blockdevice per raid group should be replaced at a time.

  • Get the details of cStor pool configuration using the following command:

    1. kubectl get cspc <CSPC name> -n openebs -o yaml

    Example command:

    1. kubectl get cspc cstor-pool-mirror -n openebs -o yaml

    The example output shows the details of the selected CSPC and the blockdevices in use:

    spec:
      auxResources: {}
      pools:
        - nodeSelector:
            kubernetes.io/hostname: gke-ranjith-csi-default-pool-6643eeb8-77f7
          poolConfig:
            cacheFile: ""
            compression: "off"
            defaultRaidGroupType: mirror
            overProvisioning: false
            resources: null
          raidGroups:
            - blockDevices:
                - blockDeviceName: blockdevice-070383c017742c82d14103a2d2047c0f
                  capacity: ""
                  devLink: ""
                - blockDeviceName: blockdevice-41d4416f6199a14b706e2ced69e2694a
                  capacity: ""
                  devLink: ""
              isReadCache: false
              isSpare: false
              isWriteCache: false
              type: mirror

  • Replace the selected blockdevice in the raid group by editing the corresponding CSPC configuration. The only requirement is that one blockdevice per raid group can be replaced at a time. The following command opens the CSPC configuration for editing, where you can replace the old blockDeviceName in the corresponding raid group with the new one.

    1. kubectl edit cspc <CSPC name> -n openebs

    Example command:

    1. kubectl edit cspc cstor-pool-mirror -n openebs

    After replacing the required blockdevice with the new one, ensure that the CSPC configuration has been applied successfully with the new blockdevice by checking the corresponding CSPI using the following command:

    1. kubectl get cspi <CSPI-name> -n openebs -o yaml

    Example command:

    1. kubectl get cspi cstor-pool-mirror-6gdq -n openebs -o yaml

    The following output shows a snippet of the blockdevices section in use after the replacement:

    spec:
      auxResources: {}
      hostName: gke-ranjith-csi-default-pool-6643eeb8-77f7
      nodeSelector:
        kubernetes.io/hostname: gke-ranjith-csi-default-pool-6643eeb8-77f7
      poolConfig:
        cacheFile: ""
        compression: "off"
        defaultRaidGroupType: mirror
        overProvisioning: false
        resources: null
      raidGroup:
        - blockDevices:
            - blockDeviceName: blockdevice-070383c017742c82d14103a2d2047c0f
              capacity: ""
              devLink: /dev/disk/by-id/scsi-0Google_PersistentDisk_ranjith-mirror1
            - blockDeviceName: blockdevice-c0179d93aebfd90c09a7a864a9148f85
              capacity: ""
              devLink: /dev/disk/by-id/scsi-0Google_PersistentDisk_ranjith-mirror4
          isReadCache: false
          isSpare: false
          isWriteCache: false
          type: mirror

    It shows that the old blockdevice has been replaced with the new one.

  • Verify if blockdevice replacement is successful by using the following command:

    1. kubectl get bd -n openebs

    Example output:

    NAME                                           NODENAME                                     SIZE          CLAIMSTATE   STATUS   AGE
    blockdevice-070383c017742c82d14103a2d2047c0f   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Claimed      Active   147m
    blockdevice-41d4416f6199a14b706e2ced69e2694a   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Unclaimed    Active   147m
    blockdevice-c0179d93aebfd90c09a7a864a9148f85   gke-ranjith-csi-default-pool-6643eeb8-77f7   42949672960   Claimed      Active   147m
    blockdevice-ef810d7dfc8e4507359963fab7e9647e   gke-ranjith-csi-default-pool-6643eeb8-77f7   53687091200   Unclaimed    Active   147m

    In the above example output, blockdevice-41d4416f6199a14b706e2ced69e2694a is replaced successfully with blockdevice-c0179d93aebfd90c09a7a864a9148f85.

    Once resilvering on the pool is completed, the state of the old blockdevice changes to Unclaimed and the new blockdevice becomes Claimed. In the future, additional verification methods will be added.
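
    Optionally, resilvering progress can also be observed from inside the corresponding pool pod. This is a hedged sketch, assuming the pool pod name obtained from kubectl get pod -n openebs and the cstor-pool container name used by CSPC pool pods:

    kubectl exec -it <pool-pod-name> -n openebs -c cstor-pool -- zpool status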

Run a sample application on Jiva volume provisioned via Jiva CSI Provisioner

OpenEBS Jiva volumes can now be provisioned with a CSI driver from OpenEBS version 1.5 onwards.

Note: The current implementation only supports provisioning and de-provisioning of Jiva Volumes. This feature is under active development and considered to be in Alpha state.

Prerequisites:

  • Kubernetes version 1.14 or higher
  • OpenEBS Version 1.5 or higher installed. Recommended OpenEBS version is 1.8.
  • iSCSI initiator utils installed on all the worker nodes.
  • You have access to install RBAC components into kube-system namespace. The Jiva CSI driver components are installed in kube-system namespace to allow them to be flagged as system critical components.
  • Base OS on worker nodes can be Ubuntu 16.04, Ubuntu 18.04 or CentOS.

Overview

  • Install OpenEBS.
  • Install Jiva operator
  • Install Jiva CSI Driver
  • Create a Storage Class with Jiva CSI provisioner
  • Provision sample application using a PVC spec which uses SC with Jiva CSI provisioner

Install OpenEBS

Latest OpenEBS version can be installed using the following command:

  1. kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.8.0.yaml

Verify if OpenEBS pods are in Running state using the following command:

  1. kubectl get pod -n openebs

Example output:

NAME                                          READY   STATUS    RESTARTS   AGE
maya-apiserver-77f9cc9f9b-jg825               1/1     Running   3          90s
openebs-admission-server-8c5b8565-d2q58       1/1     Running   0          79s
openebs-localpv-provisioner-f458bc8c4-bjmkq   1/1     Running   0          78s
openebs-ndm-lz4n6                             1/1     Running   0          80s
openebs-ndm-operator-7d7c9d966d-bqlnj         1/1     Running   1          79s
openebs-ndm-spm7f                             1/1     Running   0          80s
openebs-ndm-tm8ff                             1/1     Running   0          80s
openebs-provisioner-5fbd8fc74c-6zcnq          1/1     Running   0          82s
openebs-snapshot-operator-7d6dd4b77f-444zh    2/2     Running   0          81s

Install Jiva operator

Install Jiva operator using the following command:

  1. kubectl create -f https://raw.githubusercontent.com/openebs/jiva-operator/master/deploy/operator.yaml

Verify the status of Jiva operator using the following command:

  1. kubectl get pod -n openebs

Example output:

jiva-operator-7765cbfffd-vt787                1/1     Running   0          10s
maya-apiserver-77f9cc9f9b-jg825               1/1     Running   3          90s
openebs-admission-server-8c5b8565-d2q58       1/1     Running   0          79s
openebs-localpv-provisioner-f458bc8c4-bjmkq   1/1     Running   0          78s
openebs-ndm-lz4n6                             1/1     Running   0          80s
openebs-ndm-operator-7d7c9d966d-bqlnj         1/1     Running   1          79s
openebs-ndm-spm7f                             1/1     Running   0          80s
openebs-ndm-tm8ff                             1/1     Running   0          80s
openebs-provisioner-5fbd8fc74c-6zcnq          1/1     Running   0          82s
openebs-snapshot-operator-7d6dd4b77f-444zh    2/2     Running   0          81s

Install Jiva CSI Driver

The node components make use of the host iSCSI binaries for iSCSI connection management. Depending on the OS, the Jiva operator will have to be modified to load the required iSCSI files into the node pods.

OpenEBS Jiva CSI driver components can be installed by running the following command.

Depending on the base OS, select the appropriate deployment file.

For Ubuntu 16.04 and CentOS:

  1. kubectl apply -f https://raw.githubusercontent.com/openebs/jiva-csi/master/deploy/jiva-csi-ubuntu-16.04.yaml

For Ubuntu 18.04:

  1. kubectl apply -f https://raw.githubusercontent.com/openebs/jiva-csi/master/deploy/jiva-csi.yaml

Verify if Jiva CSI Components are installed:

  1. kubectl get pods -n kube-system -l role=openebs-jiva-csi

Example output:

NAME                            READY   STATUS    RESTARTS   AGE
openebs-jiva-csi-controller-0   4/4     Running   0          6m14s
openebs-jiva-csi-node-56t5g     2/2     Running   0          6m13s

Create a Storage Class with Jiva CSI provisioner

Create a Storage Class to dynamically provision volumes using Jiva CSI provisioner. You can save the following sample StorageClass YAML spec as jiva-csi-sc.yaml.

  1. apiVersion: storage.k8s.io/v1
  2. kind: StorageClass
  3. metadata:
  4. name: openebs-jiva-csi-sc
  5. provisioner: jiva.csi.openebs.io
  6. parameters:
  7. cas-type: "jiva"
  8. replicaCount: "1"
  9. replicaSC: "openebs-hostpath"

Create the Storage Class using the above YAML with the following command:

  1. kubectl apply -f jiva-csi-sc.yaml

Provision a sample application using Jiva StorageClass

Create a PVC by specifying the above Storage Class in the PVC spec. The following is a sample PVC spec which uses the above created Storage Class. In this example, the PVC YAML spec is saved as jiva-csi-demo-pvc.yaml.

  1. apiVersion: v1
  2. kind: PersistentVolumeClaim
  3. metadata:
  4. name: minio-pv-claim
  5. labels:
  6. app: minio-storage-claim
  7. spec:
  8. storageClassName: openebs-jiva-csi-sc
  9. accessModes:
  10. - ReadWriteOnce
  11. resources:
  12. requests:
  13. storage: 7Gi

Now, apply the PVC using the following command:

  1. kubectl apply -f jiva-csi-demo-pvc.yaml

Now, deploy your application by specifying the PVC name. The following is a sample application spec which uses the above PVC. In this example, the application YAML file is saved as jiva-csi-demo-app.yaml.

  1. apiVersion: extensions/v1beta1
  2. kind: Deployment
  3. metadata:
  4. # This name uniquely identifies the Deployment
  5. name: minio-deployment
  6. spec:
  7. strategy:
  8. type: Recreate
  9. template:
  10. metadata:
  11. labels:
  12. # Label is used as selector in the service.
  13. app: minio
  14. spec:
  15. # Refer to the PVC created earlier
  16. volumes:
  17. - name: storage
  18. persistentVolumeClaim:
  19. # Name of the PVC created earlier
  20. claimName: minio-pv-claim
  21. containers:
  22. - name: minio
  23. # Pulls the default Minio image from Docker Hub
  24. image: minio/minio
  25. args:
  26. - server
  27. - /storage
  28. env:
  29. # Minio access key and secret key
  30. - name: MINIO_ACCESS_KEY
  31. value: "minio"
  32. - name: MINIO_SECRET_KEY
  33. value: "minio123"
  34. ports:
  35. - containerPort: 9000
  36. # Mount the volume into the pod
  37. volumeMounts:
  38. - name: storage # must match the volume name, above
  39. mountPath: "/storage"

Now, apply the application using the following command:

  1. kubectl apply -f jiva-csi-demo-app.yaml

Now, deploy the service related to the application. In this example, the service YAML file is saved as jiva-csi-demo--app-service.yaml.

  1. apiVersion: v1
  2. kind: Service
  3. metadata:
  4. name: minio-service
  5. labels:
  6. app: minio
  7. spec:
  8. ports:
  9. - port: 9000
  10. nodePort: 32701
  11. protocol: TCP
  12. selector:
  13. app: minio
  14. sessionAffinity: None
  15. type: NodePort

Apply the service using the above YAML spec.

  1. kubectl apply -f jiva-csi-demo--app-service.yaml

Verify if application pod is created successfully using the following command:

  1. kubectl get pod -n <namespace>

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
minio-deployment-7c4ccff854-flt8c   1/1     Running   0          80s

Verify if PVC is created successfully using the following command:

  1. kubectl get pvc -n <namespace>

Example output:

NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
minio-pv-claim   Bound    pvc-0053ef2d-2919-47ea-aeaf-9f1cbd915bae   7Gi        RWO            openebs-jiva-csi-sc   11s

Verify if PV is created successfully using the following command:

  1. kubectl get pv

Example output:

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                                 STORAGECLASS          REASON   AGE
pvc-0053ef2d-2919-47ea-aeaf-9f1cbd915bae   7Gi        RWO            Delete           Bound    default/minio-pv-claim                                                openebs-jiva-csi-sc            17s
pvc-fb21eb55-23ce-4547-922c-44780f2c4c2f   7Gi        RWO            Delete           Bound    openebs/openebs-pvc-0053ef2d-2919-47ea-aeaf-9f1cbd915bae-jiva-rep-0   openebs-hostpath               11s

In the above output, 2 PVs are created. As per the replicaSC specified in the StorageClass spec, the Jiva replica consumes a local PV created using the StorageClass openebs-hostpath. Based on the required Jiva replica count, that many local PVs will be provisioned. In this example, PV pvc-fb21eb55-23ce-4547-922c-44780f2c4c2f is dedicated to the Jiva volume replica. If you specify replicaCount as 3, then a total of 4 PVs will be created (1 for the Jiva volume itself and 3 for its replicas).
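
If the Jiva operator installed the JivaVolume CRD in your cluster, the volume and its replica details can also be inspected through it. This is an assumption-labelled sketch; the resource name may differ depending on the operator version:

  kubectl get jivavolume -n openebs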

See Also:

cStor Concepts

cStor User Guide