Jiva User Guide

(Figure: OpenEBS configuration flow)

For details of how Jiva works, see the Jiva overview page.

Jiva is a lightweight storage engine recommended for low-capacity workloads. The snapshot and storage management features of the cStor engine are more advanced; cStor is recommended when snapshots are needed.

User operations

  • Simple provisioning of Jiva
  • Provisioning with Local or Cloud Disks
  • Provision Sample Application with Jiva Volume
  • Monitoring a Jiva Volume
  • Backup and Restore

Admin operations

  • Create a Jiva Pool
  • Create a StorageClass
  • Setting up Jiva Storage Policies

User Operations

Simple Provisioning of Jiva

To quickly provision a Jiva volume using the default pool and StorageClass, use the following command:

  kubectl apply -f https://raw.githubusercontent.com/openebs/openebs/master/k8s/demo/pvc-standard-jiva-default.yaml

In this mode, OpenEBS provisions a Jiva volume with three replicas on three different nodes, so ensure that the cluster has three nodes. The data in each replica is stored in the local container storage of that replica itself. The data is replicated and highly available, which makes this mode suitable for quick testing of OpenEBS and simple application PoCs. On a single-node cluster, download the above YAML spec, change the replica count accordingly, and apply the modified spec.
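
For reference, a PVC provisioned against the default StorageClass looks roughly like the following. This is a minimal sketch; the claim name and requested size are illustrative, and the actual content of the linked YAML may differ.

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: demo-vol1-claim
  spec:
    storageClassName: openebs-jiva-default
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5G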

Provisioning with Local or Cloud Disks

In this mode, the local disks on each node have to be formatted and mounted at a directory path. The steps for mounting a disk on a node and creating a Jiva storage pool are provided here. Then create a StorageClass that specifies this StoragePool name. You can use this StorageClass in a PVC configuration.

Provision Sample Applications with Jiva Volume

1. Percona

Here we illustrate the usage of the default Jiva StorageClass. In the following example manifest, the default StorageClass openebs-jiva-default is specified in the PersistentVolumeClaim specification, so the Jiva volume will be created with three replicas, adhering to the default configuration. The manifest for deploying Percona can be downloaded from here, or use the following spec.

  • Percona spec

    ---
    apiVersion: apps/v1beta1
    kind: Deployment
    metadata:
      name: percona
      labels:
        name: percona
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: percona
      template:
        metadata:
          labels:
            name: percona
        spec:
          securityContext:
            fsGroup: 999
          tolerations:
          - key: "ak"
            value: "av"
            operator: "Equal"
            effect: "NoSchedule"
          containers:
          - resources:
              limits:
                cpu: 0.5
            name: percona
            image: percona
            args:
            - "--ignore-db-dir"
            - "lost+found"
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: k8sDem0
            ports:
            - containerPort: 3306
              name: percona
            volumeMounts:
            - mountPath: /var/lib/mysql
              name: demo-vol1
          volumes:
          - name: demo-vol1
            persistentVolumeClaim:
              claimName: demo-vol1-claim
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-vol1-claim
    spec:
      storageClassName: openebs-jiva-default
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: percona-mysql
      labels:
        name: percona-mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
      selector:
        name: percona
  • Run the application using the following command.

    kubectl apply -f demo-percona-mysql-pvc.yaml

    Now the Percona application runs using the default Jiva storage pool.

2. Busybox

Before provisioning the application, ensure that all the below-mentioned steps are carried out:

  1. Ensure that the filesystem is mounted as per requirement. To know more about mount status, click here.
  2. First, you need to create a Jiva pool specifying the filesystem path on each node. To know the detailed steps, click here.
  3. Using this storage pool, create a StorageClass by referring here.
  4. Once all the above actions have been executed successfully, you can deploy Busybox with a Jiva volume as follows:
    Copy the below spec into a file, say demo-busybox-jiva.yaml, and update storageClassName to openebs-jiva-gpd-3repl.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
      labels:
        app: busybox
    spec:
      replicas: 1
      strategy:
        type: RollingUpdate
      selector:
        matchLabels:
          app: busybox
      template:
        metadata:
          labels:
            app: busybox
        spec:
          containers:
          - resources:
              limits:
                cpu: 0.5
            name: busybox
            image: busybox
            command: ['sh', '-c', 'echo Container 1 is Running ; sleep 3600']
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 3306
              name: busybox
            volumeMounts:
            - mountPath: /var/lib/mysql
              name: demo-vol1
          volumes:
          - name: demo-vol1
            persistentVolumeClaim:
              claimName: demo-vol1-claim
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-vol1-claim
    spec:
      storageClassName: openebs-jiva-gpd-3repl
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: busybox-mysql
      labels:
        name: busybox-mysql
    spec:
      ports:
      - port: 3306
        targetPort: 3306
      selector:
        name: busybox

    To deploy Busybox, execute:

    kubectl apply -f demo-busybox-jiva.yaml
  5. To verify whether the application is successfully deployed, execute the following command:

    kubectl get pods

    The application pod should be running, as displayed below.

    NAME                       READY   STATUS    RESTARTS   AGE
    busybox-66db7d9b88-kkktl   1/1     Running   0          2m16s

Monitoring a Jiva Volume

By default, VolumeMonitor is set to ON in the Jiva StorageClass. Volume metrics are exported when this parameter is set to ON. The following metrics are supported by Jiva as of the current release.

  openebs_actual_used             # Actual volume size used
  openebs_connection_error_total  # Total no. of connection errors
  openebs_connection_retry_total  # Total no. of connection retry requests
  openebs_degraded_replica_count  # Total no. of degraded/ro replicas
  openebs_healthy_replica_count   # Total no. of healthy replicas
  openebs_logical_size            # Logical size of volume
  openebs_parse_error_total       # Total no. of parsing errors
  openebs_read_block_count        # Read block count of volume
  openebs_read_time               # Read time on volume
  openebs_reads                   # Read I/Os on volume
  openebs_sector_size             # Sector size of volume
  openebs_size_of_volume          # Size of the volume requested
  openebs_total_replica_count     # Total no. of replicas connected to cas
  openebs_volume_status           # Status of volume: (1, 2, 3, 4) = {Offline, Degraded, Healthy, Unknown}
  openebs_volume_uptime           # Time since the volume registered
  openebs_write_block_count       # Write block count of volume
  openebs_write_time              # Write time on volume
  openebs_writes                  # Write I/Os on volume

Grafana charts can be built for the above Prometheus metrics.
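
As an example, such charts could be driven by PromQL queries like the following. This is a sketch assuming the above metrics are already being scraped by Prometheus; openebs_reads is treated as a counter here.

  # Current volume status (1=Offline, 2=Degraded, 3=Healthy, 4=Unknown)
  openebs_volume_status

  # Per-second read I/O rate over the last 5 minutes
  rate(openebs_reads[5m])

  # Fraction of connected replicas that are healthy
  openebs_healthy_replica_count / openebs_total_replica_count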

Backup and Restore

OpenEBS volumes can be backed up and restored along with the application using the Velero plugin. This helps users take backups of OpenEBS volumes to a third-party storage location and restore the data whenever needed. The steps for backup and restore are as follows.

Prerequisites

  • The mount propagation feature has to be enabled on Kubernetes; otherwise, the data written from the pods will not be visible in the restic daemonset pod on the same node. It is enabled by default since Kubernetes version 1.12. More details can be found here.
  • The latest tested Velero version is 1.4.0.
  • Create required storage provider configuration to store the backup data.
  • Create required storage class on destination cluster.
  • Annotate required application pod that contains a volume to back up.

Overview

Velero is a utility to back up and restore your Kubernetes resources and persistent volumes.

To back up and restore a Jiva volume, configure Velero with restic and use the velero backup command to back up the application along with its OpenEBS Jiva volume. This invokes restic internally, copies the data from the given application, including the entire data from the associated persistent volumes, and backs it up to the configured storage location such as S3 or MinIO.

The following is the step-by-step procedure for backing up and restoring an application with Jiva.

  1. Install Velero
  2. Annotate Application Pod
  3. Creating and Managing Backups
  4. Steps for Restore

Install Velero (Formerly known as ARK)

Follow the instructions at the Velero documentation to install and configure Velero, and follow the restic integration documentation for setting up and using restic support.

While installing Velero in your cluster, specify --use-restic to enable restic support.
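
For example, a Velero 1.4 installation with restic support against an S3-compatible bucket looks roughly like the following. This is a sketch; the provider, plugin version, bucket name, region, and credentials file are illustrative and depend on your storage provider.

  velero install \
    --provider aws \
    --plugins velero/velero-plugin-for-aws:v1.1.0 \
    --bucket velero-backups \
    --secret-file ./credentials-velero \
    --backup-location-config region=us-east-1 \
    --use-restic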

After installing Velero with restic support, verify that the restic pods and the Velero pod are running using the following command.

  kubectl get pod -n velero

The following is an example output in a 3 Node cluster.

  NAME                    READY   STATUS    RESTARTS   AGE
  restic-8hxx8            1/1     Running   0          9s
  restic-nd9d9            1/1     Running   0          9s
  restic-zfggm            1/1     Running   0          9s
  velero-db6459bb-n2rff   1/1     Running   0          9s

Annotate Application Pod

Run the following to annotate each application pod that contains a volume to back up.

  kubectl -n YOUR_POD_NAMESPACE annotate pod/YOUR_POD_NAME backup.velero.io/backup-volumes=YOUR_VOLUME_NAME_1,YOUR_VOLUME_NAME_2,...

In the above command, the volume names are the names of the volumes specified in the application pod spec.

Example Spec:

If the application spec contains the volume name as mentioned below, then use demo-vol1 as the volume name in the annotation command.

  volumeMounts:
  - mountPath: /var/lib/mysql
    name: demo-vol1
  volumes:
  - name: demo-vol1
    persistentVolumeClaim:
      claimName: demo-vol1-claim

And if the application pod name is percona-7b64956695-dk95r, use the following command to annotate the application.

  kubectl -n default annotate pod/percona-7b64956695-dk95r backup.velero.io/backup-volumes=demo-vol1

Creating and Managing Backups

Take the backup using the below command. Add the selector to prevent the Jiva controller and replica deployments from being backed up.

  velero backup create <backup_name> --selector '!openebs.io/controller,!openebs.io/replica'

Example:

  velero backup create hostpathbkp2 --selector '!openebs.io/controller,!openebs.io/replica'

After taking the backup, verify whether the backup was taken successfully using the following command.

  velero backup get

The following is a sample output.

  NAME           STATUS      CREATED                         EXPIRES   STORAGE LOCATION   SELECTOR
  hostpathbkp2   Completed   2019-06-19 17:14:43 +0530 IST   29d       default            !openebs.io/controller,!openebs.io/replica

You will get more details about the backup using the following command.

  velero backup describe hostpathbkp2 --details

Once the backup is completed, you should see the Phase marked as Completed in the output of the above command.

Steps for Restore

A Velero backup can be restored onto a new cluster or the same cluster. An OpenEBS PVC with the same name as the original PVC will be created, and the application will run using the restored OpenEBS volume.

Prerequisites

  • Create the same namespace and StorageClass configuration of the source PVC in your destination cluster.
  • If the restore happens on the same cluster where the source PVC was created, ensure that the application and its corresponding components such as the Service, PVC, and PV are deleted successfully (see the example after this list).
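
For the in-place restore case, the cleanup could look like the following sketch, which uses the Percona example names from this guide; substitute your own resource names.

  kubectl delete deployment percona
  kubectl delete svc percona-mysql
  kubectl delete pvc demo-vol1-claim
  # The bound PV is removed automatically when its reclaim policy is Delete; verify with:
  kubectl get pv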

On the target cluster, restore the application using the below command.

  velero restore create --from-backup <backup-name>

Example:

  velero restore create --from-backup hostpathbkp2

The following can be used to obtain the restore job status.

  velero restore get

The following is an example output. Once the restore is completed, you should see the status marked as Completed.

  NAME                          BACKUP         STATUS      WARNINGS   ERRORS   CREATED                         SELECTOR
  hostpathbkp2-20190619171932   hostpathbkp2   Completed   44         0        2019-06-19 17:19:33 +0530 IST   <none>

Verify application status using the following command.

  kubectl get pod -n <namespace>

Verify PVC status using the following command.

  kubectl get pvc -n <namespace>

Verify PV status using the following command.

  kubectl get pv

Admin Operations

Create a Jiva Pool

The process of creating a Jiva pool includes the following steps.

  1. Prepare disks and mount them.

  2. Create a Jiva pool using the mounted disk.

Prepare disks and mount them

If it is a cloud disk, provision it and mount it on the node. If three replicas of the Jiva volume are needed, provision three cloud disks and mount one on each node. The mount path needs to be the same on all three nodes. The following are the steps for creating a GPD disk on Google Cloud and mounting it on the node.

  • Create a GPD

    gcloud compute disks create disk1 --size 100GB --type pd-standard --zone us-central1-a

  • Attach the GPD to a node

    gcloud compute instances attach-disk <Node Name> --disk disk1 --zone us-central1-a

  • If the attached disk is mapped to /dev/sdb, verify the size, format the disk, and mount it

    sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
    sudo mkfs.ext4 /dev/<device-name>
    sudo mkdir /home/openebs-gpd
    sudo mount /dev/sdb /home/openebs-gpd

  • Repeat the above steps on the other two nodes if this is a three-replica case.

Create a Jiva Pool using the mounted disk

A Jiva pool requires the mount path to be prepared and available on the node. Note that if the mount path does not point to a real disk, a local directory is created at this mount path and the replica data goes to the container image disk (similar to the case of the default pool).

  • The YAML specification to create the Jiva pool is shown below

    apiVersion: openebs.io/v1alpha1
    kind: StoragePool
    metadata:
      name: gpdpool
      type: hostdir
    spec:
      path: "/home/openebs-gpd"

  • Copy the above content into a file called jiva-gpd-pool.yaml and create the pool using the following command.

    kubectl apply -f jiva-gpd-pool.yaml

  • Verify if the pool is created using the following command

    kubectl get storagepool

Create a StorageClass

This StorageClass is mainly for using a Jiva StoragePool created with a mounted disk. A Jiva volume can also be provisioned using the default Jiva StorageClass named openebs-jiva-default in the corresponding PVC spec; the default StorageClass has a replica count of 3. The steps for creating a Jiva storage pool are in the section above. Specify the Jiva pool in the StoragePool config of the StorageClass. An example StorageClass specification is given below.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-gpd-3repl
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: ReplicaCount
          value: "3"
        - name: StoragePool
          value: gpdpool
  provisioner: openebs.io/provisioner-iscsi

  • Copy the above content into a file called jiva-gpd-3repl-sc.yaml and create the StorageClass using the following command

    kubectl apply -f jiva-gpd-3repl-sc.yaml

  • Verify if the StorageClass is created using the following command

    kubectl get sc
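
The new StorageClass can then be referenced from a PVC spec, for example as follows. This is a sketch; the claim name and requested size are illustrative.

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: demo-gpd-claim
  spec:
    storageClassName: openebs-jiva-gpd-3repl
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5G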

Setting up Jiva Storage Policies

The below table lists the storage policies supported by Jiva. These policies can be added to a StorageClass and applied through the PersistentVolumeClaim or VolumeClaimTemplates interface.

JIVA STORAGE POLICY | MANDATORY | DEFAULT | PURPOSE
ReplicaCount | No | 3 | Defines the number of Jiva volume replicas
ReplicaImage | - | quay.io/openebs/m-apiserver:1.12.0 | To use a particular Jiva replica image
ControllerImage | - | quay.io/openebs/jiva:1.12.0 | To use a particular Jiva controller image
StoragePool | Yes | default | A storage pool provides a persistent path for an OpenEBS volume. It can be a directory on the host OS or an externally mounted disk.
VolumeMonitor | - | ON | When ON, a volume exporter sidecar is launched to export Prometheus metrics.
VolumeMonitorImage | - | quay.io/openebs/m-exporter:1.12.0 | Used when VolumeMonitor is ON. Specifies a dedicated metrics exporter image for the workload.
Volume FSType | - | ext4 | Specifies the filesystem with which the volume should be formatted. xfs is also supported.
Volume Space Reclaim | - | false | Specifies whether data needs to be retained post PVC deletion.
TargetNodeSelector | - | Decided by Kubernetes scheduler | Label in key: value format telling the Kubernetes scheduler to schedule the Jiva target pod on nodes that match the label.
ReplicaNodeSelector | - | Decided by Kubernetes scheduler | Label in key: value format telling the Kubernetes scheduler to schedule the Jiva replica pods on nodes that match the label.
TargetTolerations | - | Decided by Kubernetes scheduler | Configures the tolerations for the Jiva target pod.
ReplicaTolerations | - | Decided by Kubernetes scheduler | Configures the tolerations for the Jiva replica pods.
TargetResourceLimits | - | Decided by Kubernetes scheduler | CPU and memory limits for the Jiva target pod.
TargetResourceRequests | - | Decided by Kubernetes scheduler | Resource requests that need to be available before scheduling the containers.
AuxResourceLimits | - | Decided by Kubernetes scheduler | Resource limits on the sidecars of the target pod.
AuxResourceRequests | - | Decided by Kubernetes scheduler | Minimum requests such as ephemeral storage to avoid erroneous eviction by Kubernetes.
ReplicaResourceRequests | - | Decided by Kubernetes scheduler | Resource requests that need to be available to the replica.
ReplicaResourceLimits | - | Decided by Kubernetes scheduler | Resource limits for the replica.
Target Affinity | - | Decided by Kubernetes scheduler | A label key: value pair used on both the Jiva target and the application so that the application pod and the Jiva target pod are scheduled on the same node.
OpenEBS Namespace Policy for Jiva Pods | - | false | Jiva pods are deployed in the PVC namespace by default. With the value true, Jiva pods run in the OpenEBS namespace.

Replica Count Policy

You can specify the Jiva replica count using the value for the ReplicaCount property. In the following example, ReplicaCount is specified as 3. Hence, three replicas are created.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: ReplicaCount
          value: "3"
  provisioner: openebs.io/provisioner-iscsi

Replica Image Policy

You can specify the Jiva replica image using the value for the ReplicaImage property.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: ReplicaImage
          value: quay.io/openebs/m-apiserver:1.12.0
  provisioner: openebs.io/provisioner-iscsi

Controller Image Policy

You can specify the Jiva controller image using the value for the ControllerImage property.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: ControllerImage
          value: quay.io/openebs/jiva:1.12.0
  provisioner: openebs.io/provisioner-iscsi

Volume Monitor Policy

You can enable or disable the Jiva volume monitor feature using the value for the VolumeMonitor property.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - enabled: "true"
          name: VolumeMonitor
  provisioner: openebs.io/provisioner-iscsi

Storage Pool Policy

A storage pool provides a persistent path for an OpenEBS volume. It can be a directory on any of the following.

  • the host OS, or
  • a mounted disk

Note:

You must define the storage pool as a Kubernetes Custom Resource (CR) before using it as a StoragePool policy. The following is a sample Kubernetes custom resource definition for a storage pool.

  apiVersion: openebs.io/v1alpha1
  kind: StoragePool
  metadata:
    name: default
    type: hostdir
  spec:
    path: "/mnt/openebs"

This storage pool custom resource can now be used as follows in the storage class.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: StoragePool
          value: default
  provisioner: openebs.io/provisioner-iscsi

Volume File System Type Policy

You can specify a StorageClass policy that sets the file system type. By default, OpenEBS comes with the ext4 file system. However, you can also use the xfs file system.

Following is a sample setting.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-mongodb
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: FSType
          value: "xfs"
  provisioner: openebs.io/provisioner-iscsi

Volume Monitoring Image Policy

You can specify the monitoring image policy for a particular volume using the value for the VolumeMonitorImage property. The following Kubernetes StorageClass sample uses the volume monitoring policy. By default, the volume monitor is enabled.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: VolumeMonitorImage
          value: quay.io/openebs/m-exporter:1.12.0
  provisioner: openebs.io/provisioner-iscsi

Volume Space Reclaim Policy

This storage policy can disable Jiva volume space reclaim. You can control the feature using the value for the RetainReplicaData property, which specifies whether the Jiva replica data folder should be cleared or retained when the PVC is deleted. To disable volume space reclaim (in other words, to retain the volume data post PVC deletion), set RetainReplicaData to true. In the following example, the Jiva volume space reclaim feature is disabled; hence, volume data will be retained post PVC deletion.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-jiva-default
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: RetainReplicaData
          enabled: true
  provisioner: openebs.io/provisioner-iscsi

Target NodeSelector Policy

You can specify where the Jiva target pod has to be scheduled using the value for TargetNodeSelector. In the following example, node: appnode is the node label.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: TargetNodeSelector
          value: |-
            node: appnode
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi
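
For this selector to match, the node must carry the corresponding label, which can be applied as shown below (the node name is a placeholder). The same approach applies to the ReplicaNodeSelector policy in the next section.

  kubectl label nodes <node-name> node=appnode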

Replica NodeSelector Policy

You can specify where the replica pods have to be scheduled using the value for ReplicaNodeSelector. In the following sample StorageClass YAML, node: openebs is the node label.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: ReplicaNodeSelector
          value: |-
            node: openebs
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

TargetTolerations Policy

You can specify TargetTolerations to set the tolerations for the Jiva target pod.

  - name: TargetTolerations
    value: |-
      t1:
        key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
      t2:
        key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoExecute"
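
The fragment above is not a complete StorageClass; like the other policies, it is embedded under the cas.openebs.io/config annotation. The following is a sketch of the full object, with an illustrative StorageClass name; the ReplicaTolerations fragment below embeds the same way.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: jiva-target-tolerations
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: TargetTolerations
          value: |-
            t1:
              key: "key1"
              operator: "Equal"
              value: "value1"
              effect: "NoSchedule"
  provisioner: openebs.io/provisioner-iscsi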

ReplicaTolerations Policy

You can specify ReplicaTolerations to set the tolerations for the Jiva replica pods.

  - name: ReplicaTolerations
    value: |-
      t1:
        key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
      t2:
        key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoExecute"

TargetResourceRequests Policy

You can use TargetResourceRequests to specify resource requests that need to be available before scheduling the containers.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: TargetResourceRequests
          value: |-
            memory: 1Gi
            cpu: 200m
            ephemeral-storage: "100Mi"
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

Target ResourceLimits Policy

You can use TargetResourceLimits to restrict the memory and CPU usage of the Jiva target pod within the given limit.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: TargetResourceLimits
          value: |-
            memory: 1Gi
            cpu: 200m
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

AuxResourceLimits Policy

You can use AuxResourceLimits to set resource limits on the sidecars.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: AuxResourceLimits
          value: |-
            memory: 1Gi
            cpu: 100m
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

AuxResourceRequests Policy

This feature is useful where minimum requests such as ephemeral storage have to be specified to avoid erroneous eviction by Kubernetes. AuxResourceRequests allows you to set requests on the sidecars. Requests have to be specified in the format expected by Kubernetes.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: AuxResourceRequests
          value: |-
            memory: 0.5Gi
            cpu: 50m
            ephemeral-storage: "50Mi"
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

ReplicaResourceRequests Policy

You can use ReplicaResourceRequests to specify the resource requests of the replica pod, such as memory, CPU, and ephemeral-storage values.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: ReplicaResourceRequests
          value: |-
            memory: 1Gi
            cpu: 200m
            ephemeral-storage: "100Mi"
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

ReplicaResourceLimits Policy

You can use ReplicaResourceLimits to restrict the memory usage of the replica pod within the given limit.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      cas.openebs.io/config: |
        - name: ReplicaResourceLimits
          value: |-
            memory: 2Gi
      openebs.io/cas-type: jiva
  provisioner: openebs.io/provisioner-iscsi

Target Affinity Policy

Stateful workloads access the OpenEBS storage volume by connecting to the volume target pod. This policy can be used to co-locate the volume target pod on the same node as the workload.

  • This feature makes use of the Kubernetes pod affinity feature, which depends on pod labels. You will need to add the following label to both the application and the PVC.

    labels:
      openebs.io/target-affinity: <application-unique-label>
  • You can specify the target affinity in both the application and the OpenEBS PVC as follows.

    The following is a snippet of an application deployment YAML spec implementing target affinity.

    apiVersion: v1
    kind: Pod
    metadata:
      name: fio-jiva
      labels:
        name: fio-jiva
        openebs.io/target-affinity: fio-jiva

    For OpenEBS PVC, it will be similar to the following.

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: fio-jiva-claim
      labels:
        openebs.io/target-affinity: fio-jiva

Note: This feature works only for cases where a single application pod instance is associated with a PVC. An example YAML spec for the application deployment can be found here. In the case of a StatefulSet, this feature is supported only for single-replica StatefulSets.

OpenEBS Namespace Policy for Jiva Pods

This StorageClass policy is for deploying the Jiva pods in the OpenEBS namespace. By default, the value is false, so Jiva pods are deployed in the PVC namespace. The following are the main reasons for running Jiva pods in the OpenEBS namespace.

  • With the default value, granting additional privileges to Jiva pods to access a hostpath might involve granting privileges to the entire namespace of the PVC. With this value enabled as true, Jiva pods get the additional privileges to access the hostpath in the OpenEBS namespace.

  • To avoid duplicate Jiva pod creation during restore with Velero.

The following is a snippet of a StorageClass YAML spec for running Jiva pods in the openebs namespace.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: jiva-pods-in-openebs-ns
    annotations:
      openebs.io/cas-type: jiva
      cas.openebs.io/config: |
        - name: DeployInOpenEBSNamespace
          enabled: "true"
  provisioner: openebs.io/provisioner-iscsi

See Also:

Understanding Jiva
