Installing OpenEBS

OpenEBS configuration flow

Summary:

  • Verify iSCSI client
  • Set cluster-admin user context and RBAC
  • Installation through helm
  • Installation through kubectl
  • Verifying OpenEBS installation
  • Post-Installation considerations

Verify iSCSI client

The iSCSI client is a prerequisite only for provisioning cStor and Jiva volumes. It is nevertheless recommended that the iSCSI client is set up and the iscsid service is running on the worker nodes before proceeding with the OpenEBS installation.
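
As a quick check, you can confirm on each worker node that the iscsid service is active. The commands below are a sketch for an Ubuntu node (the open-iscsi package provides the service there); package and service names differ on other distributions.

  # verify the iSCSI initiator service is running
  systemctl status iscsid

  # on Ubuntu, install and enable it if missing
  sudo apt-get install open-iscsi
  sudo systemctl enable --now iscsid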

Set cluster-admin user context and RBAC

For installation of OpenEBS, cluster-admin user context is a must.
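
To check which context is currently active and whether it has cluster-admin-level access, you can use the standard kubectl commands below (a sketch; the exact output depends on your cluster).

  # show the context kubectl is currently using
  kubectl config current-context

  # check whether the current user can perform any action on any resource
  kubectl auth can-i '*' '*' --all-namespaces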

If there is no cluster-admin user context already present, create one and use it. Use the following command to create the new context.

  kubectl config set-context NAME [--cluster=cluster_nickname] [--user=user_nickname] [--namespace=namespace]

Example:

  kubectl config set-context admin-ctx --cluster=gke_strong-eon-153112_us-central1-a_rocket-test2 --user=cluster-admin

Set the existing cluster-admin user context or the newly created context by using the following command.

Example:

  kubectl config use-context admin-ctx

If you are using GKE or another cloud provider, you must enable RBAC before installing OpenEBS. This can be done from the Kubernetes master console by executing the following command.

  kubectl create clusterrolebinding <cluster_name>-admin-binding --clusterrole=cluster-admin --user=<user-registered-email-with-the-provider>
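
For example, on GKE the registered user email can usually be taken from the active gcloud account. The binding name cluster-admin-binding below is illustrative.

  kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)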

Installation through helm

Verify that helm is installed and the helm repo is updated. See the helm docs for setting up helm, and the instructions below for setting up RBAC for Tiller.

In the default installation mode, use the following command to install OpenEBS in the openebs namespace.

  helm install --namespace openebs --name openebs stable/openebs

Note: Since Kubernetes 1.12, containers that do not set their resource requests and limits can be evicted. It is recommended to set these values appropriately in the OpenEBS pod specs in the operator YAML before installing OpenEBS. An example configuration is given in the Example configuration - Pod resource requests section below.

As a next step, verify your installation and complete the post-installation steps.

In the custom installation mode, you can achieve the following advanced configurations:

  • Choose a set of nodes for the OpenEBS control plane pods.
  • Choose a set of nodes for the OpenEBS storage pool.
  • Customize the disk filters that should be excluded from being used.
  • Choose a custom namespace other than the default namespace openebs.

Follow the instructions below to make any of the above configurations, and then install OpenEBS through helm using values.yaml.
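
One way to obtain the chart's default values.yaml for editing, assuming the stable/openebs chart is reachable from your configured helm repos, is to dump it with helm inspect.

  # write the chart's default values to a local file for editing
  helm inspect values stable/openebs > values.yaml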

Setup nodeSelectors for OpenEBS control plane

In a large Kubernetes cluster, you may choose to limit the scheduling of the OpenEBS control plane pods to two or three specific nodes. To do this, use the nodeSelector field in the PodSpec of the OpenEBS control plane pods - apiserver, volume provisioner, admission-controller, and snapshot operator.

See the example under "For nodeSelectors in values.yaml (helm)" below.

Setup nodeSelectors for Node Disk Manager (NDM)

OpenEBS cStorPool is constructed using the disk custom resources (disk CRs) created by the Node Disk Manager (NDM). If you want only some nodes in the Kubernetes cluster to be used for OpenEBS storage (for hosting cStor Storage Pool instances), use the nodeSelector field of the NDM PodSpec and dedicate those nodes to NDM.

See the example under "For nodeSelectors in values.yaml (helm)" below.

Setup disk filters for Node Disk Manager

By default, NDM filters out the disk patterns below and converts the rest of the disks discovered on a given node into DISK CRs, as long as they are not mounted.

  1. "exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"

If your cluster nodes have other disk types that should be filtered out (that is, not created as DISK CRs), add the additional disk patterns to the exclude list.

See the example under "For disk filters in values.yaml (helm)" below.

Other values.yaml parameters

For customized configuration through helm, use values.yaml or command line parameters.

Default values for Helm Chart parameters are provided below.

After doing the custom configuration in the values.yaml file, run the below command to do the custom installation.

  helm install --namespace <custom_namespace> --name openebs stable/openebs -f values.yaml

As a next step, verify your installation and complete the post-installation steps.

Installation through kubectl

In the default installation mode, use the following command to install OpenEBS. OpenEBS is installed in the openebs namespace.

  kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.0.0.yaml

Note: Since Kubernetes 1.12, pod containers that do not set their resource requests and limits can be evicted. It is recommended to set these values appropriately in the OpenEBS pod specs in the operator YAML before installing OpenEBS. An example configuration is given in the Example configuration - Pod resource requests section below.

As a next step, verify your installation and complete the post-installation steps.

In the custom installation mode, you can achieve the following advanced configurations.

  • Choose a set of nodes for the OpenEBS control plane pods
  • Choose a set of nodes for the OpenEBS storage pool
  • Customize the disk filters that should be excluded from being used
  • (Optional) Configure environment variables in the OpenEBS operator YAML

For a custom installation, download the openebs-operator-1.0.0.yaml file, update the above configurations using the instructions below, and proceed to install with the kubectl command.
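
For example, the operator YAML referenced above can be downloaded for editing with wget (or curl -O):

  wget https://openebs.github.io/charts/openebs-operator-1.0.0.yaml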

Setup nodeSelectors for OpenEBS control plane

In a large Kubernetes cluster, you may choose to limit the scheduling of the OpenEBS control plane pods to two or three specific nodes. To do this, specify a map of key-value pairs and then attach the same key-value pairs as labels to the required nodes in the cluster.

An example nodeSelector configuration for the OpenEBS control plane components is given under "For nodeSelectors in openebs-operator.yaml" below.

Setup nodeSelectors for Admission Controller

The admission controller intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized. The OpenEBS admission controller implements additional custom admission policies to validate incoming requests. The following admission policies are available in the latest release.

  1. PersistentVolumeClaim delete requests are validated to check whether a clone PersistentVolumeClaim exists for the claim being deleted.
  2. Clone PersistentVolumeClaim create requests are validated against the requested claim capacity, which has to be equal to the snapshot size.

The admission controller pod can be scheduled on a particular node using the nodeSelector method.

An example nodeSelector configuration is given under "For nodeSelectors in openebs-operator.yaml" below.

Setup nodeSelectors for Node Disk Manager (NDM)

OpenEBS cStorPool is constructed using the block device custom resources (block device CRs) created by the Node Disk Manager (NDM). If you want only some nodes in the Kubernetes cluster to be used for OpenEBS storage (for hosting cStor Storage Pool instances), specify a map of key-value pairs and then attach the same key-value pairs as labels to the required nodes in the cluster.

An example nodeSelector configuration is given under "For nodeSelectors in openebs-operator.yaml" below.

Setup disk filters for Node Disk Manager

By default, NDM filters out the disk patterns below and converts the rest of the disks discovered on a given node into DISK CRs, as long as they are not mounted.

"exclude":"loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-"

If your cluster nodes have other disk types that should be filtered out (that is, not created as DISK CRs), add the additional disk patterns to the exclude list in the YAML file.

See the example under "For disk filters in openebs-operator.yaml" below.

Configure Environmental Variable

Some of the configuration related to the cStor target, the default cStor sparse pool, the Local PV basepath, and so on can be set as environment variables in the corresponding deployment specification.

SparseDir

SparseDir is a hostPath directory in which to look for sparse files. The default value is “/var/openebs/sparse”.

The following configuration must be added as an environment variable in the maya-apiserver deployment specification. This change must be made before applying the OpenEBS operator YAML file.

  # environment variable
  - name: SparseDir
    value: "/var/lib/"
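
After the operator is applied, you can confirm that the environment variable landed in the deployment. The sketch below assumes the deployment is named maya-apiserver, as in the sample pod output later in this document.

  # print the env list of the first container in the maya-apiserver deployment
  kubectl get deploy maya-apiserver -n openebs -o jsonpath='{.spec.template.spec.containers[0].env}'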

Default cStorSparsePool

The OpenEBS installation will create a default cStor sparse pool based on this configuration value. If “true”, default cStor sparse pools are configured; if “false”, no default cStor sparse pool is configured. The default value is “false”. The use of cStor sparse pools is intended for testing purposes only.

The following configuration must be added as an environment variable in the maya-apiserver deployment specification to install a cStor pool using sparse disks. This change must be made before applying the OpenEBS operator YAML file.

Example:

  # environment variable
  - name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
    value: "false"

TargetDir

TargetDir is a hostPath directory for the target pod. The default value is “/var/openebs”. The existing host path can be overridden by introducing an OPENEBS_IO_CSTOR_TARGET_DIR ENV in the maya-apiserver deployment. This configuration might be required where the underlying host OS does not have write permission on the default OpenEBS path (/var/openebs/).

The following configuration must be added as an environment variable in the maya-apiserver deployment specification. This change must be made before applying the OpenEBS operator YAML file.

Example:

  # environment variable
  - name: OPENEBS_IO_CSTOR_TARGET_DIR
    value: "/var/lib/overlay/openebs"

Basepath for OpenEBS Local PV

By default, the hostpath is configured as /var/openebs/local for hostpath-based Local PVs. This can be changed during the OpenEBS operator installation by passing the OPENEBS_IO_BASE_PATH ENV parameter to the Local PV dynamic provisioner deployment.

  # environment variable
  - name: OPENEBS_IO_BASE_PATH
    value: "/mnt/"

After doing the custom configuration in the downloaded openebs-operator.yaml file, run the following command to perform the custom installation.

  kubectl apply -f <custom-openebs-operator-1.0.0.yaml>

As a next step, verify your installation and complete the post-installation steps.

Verifying OpenEBS installation

Verify pods:

List the pods in the openebs namespace.

  kubectl get pods -n openebs

If OpenEBS is installed successfully, you should see output similar to the following.

  NAME                                           READY   STATUS    RESTARTS   AGE
  maya-apiserver-64b68fdb45-sxbwx                1/1     Running   0          4m22s
  openebs-admission-server-9b48bcf5f-l85rt       1/1     Running   0          4m16s
  openebs-localpv-provisioner-79c59bf5db-tkgln   1/1     Running   0          4m15s
  openebs-ndm-42446                              1/1     Running   0          4m19s
  openebs-ndm-4s8x9                              1/1     Running   0          4m19s
  openebs-ndm-knc9g                              1/1     Running   0          4m19s
  openebs-ndm-operator-db4c77957-dgp4t           1/1     Running   0          4m18s
  openebs-provisioner-66f767bbf7-7t4vs           1/1     Running   0          4m21s
  openebs-snapshot-operator-656f6b7878-ghrgr     2/2     Running   0          4m20s

openebs-ndm is a DaemonSet; it should be running on all nodes, or on the nodes selected through the nodeSelector configuration.
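
You can cross-check the DaemonSet's desired and ready pod counts with a plain DaemonSet listing; the name openebs-ndm below is inferred from the pod names in the sample output above.

  kubectl get daemonset -n openebs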

The control plane pods openebs-provisioner, maya-apiserver, and openebs-snapshot-operator should be running. If you have configured nodeSelectors, check whether they are scheduled on the appropriate nodes by listing the pods with kubectl get pods -n openebs -o wide.

Verify StorageClasses:

List the storage classes to check that OpenEBS has installed the default StorageClasses.

  kubectl get sc

If the installation was successful, the following StorageClasses should have been created.

  NAME                        PROVISIONER                                                 AGE
  openebs-device              openebs.io/local                                            4m24s
  openebs-hostpath            openebs.io/local                                            4m24s
  openebs-jiva-default        openebs.io/provisioner-iscsi                                4m25s
  openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter    4m25s
  standard (default)          kubernetes.io/gce-pd                                        31m

Verify Block Device CRs

The NDM DaemonSet creates a block device CR for each block device discovered on the node, with two exceptions:

  • The disks that match the exclusions in ‘vendor-filter’ and ‘path-filter’
  • The disks that are already mounted in the node

List the block device CRs to verify the CRs are appearing as expected.

  kubectl get blockdevice -n openebs

Following is an example output.

  NAME                                           SIZE          CLAIMSTATE   STATUS   AGE
  blockdevice-936911c5c9b0218ed59e64009cc83c8f   42949672960   Unclaimed    Active   3m

To find out which node a block device CR belongs to, check the node label set on the CR with the following command.

  kubectl describe blockdevice <blockdevice-cr>
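
Alternatively, the node labels on all block device CRs can be listed at once; the exact label key (for example kubernetes.io/hostname) may vary by NDM version.

  kubectl get blockdevice -n openebs --show-labels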

Verify cStor Storage pool

  kubectl get csp

Following is an example output.

  NAME              ALLOCATED   FREE    CAPACITY   STATUS    TYPE      AGE
  cstor-disk-4tfw   77K         39.7G   39.8G      Healthy   striped   42s

Verify Jiva default pool - default

  kubectl get sp

Following is an example output.

  NAME      AGE
  default   3h

Note that listing `sp` lists both `csp` and the `Jiva pool`.

Post-Installation considerations

For simple testing of OpenEBS, you can use the default storage classes below:

  • openebs-jiva-default for provisioning Jiva volumes (this uses the default pool, which means the data replicas are created in the /mnt/openebs_disk directory of the Jiva replica pod)

  • openebs-hostpath for provisioning Local PV on hostpath.

  • openebs-device for provisioning Local PV on device.

To use real disks, you have to create cStorPools, Jiva pools, or OpenEBS Local PVs based on your requirements, and then create corresponding StorageClasses or use the default StorageClasses.
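
As a quick sanity test of the default classes listed above, a claim such as the following can be applied; the PVC name demo-vol-claim and the requested size are illustrative only.

  # sample PVC against the default openebs-hostpath StorageClass
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: demo-vol-claim
  spec:
    storageClassName: openebs-hostpath
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi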

To monitor the OpenEBS volumes and obtain corresponding logs, connect to the free SaaS service Kubera. See connecting to Kubera.

Example configuration - Pod resource requests

All OpenEBS components should have resource requests set against each of their pod containers. These should be added in the OpenEBS operator YAML file before applying it. This setting is useful where the user has to specify minimum requests, such as ephemeral storage, to avoid erroneous eviction by Kubernetes.
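
As a sketch, a container entry in the operator YAML (for example the maya-apiserver container) could carry standard Kubernetes requests and limits like the following; the values are illustrative and should be tuned to your cluster.

  # illustrative values only; tune per cluster capacity
  resources:
    requests:
      memory: "250Mi"
      cpu: "100m"
      ephemeral-storage: "100Mi"
    limits:
      memory: "500Mi"
      cpu: "200m"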

AuxResourceRequests

AuxResourceRequests allows you to set resource requests on the sidecar containers. It is useful in the same cases as above, where the user has to specify minimum requests, such as ephemeral storage, to avoid erroneous eviction by Kubernetes.

  - name: AuxResourceRequests
    value: |-
      memory: 0.5Gi
      cpu: 100m

Example configurations - helm

Setup RBAC for Tiller before Installing OpenEBS Chart

  kubectl -n kube-system create sa tiller
  kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
  kubectl -n kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
  kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'
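
Before installing the chart, you may want to confirm that Tiller has rolled out again with the patched service account, for example:

  kubectl -n kube-system rollout status deploy/tiller-deploy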

Ensure that the helm repo on your master node is updated, so that you get the latest OpenEBS chart, using the following command.

  helm repo update

For nodeSelectors in values.yaml (helm)

First, label the required nodes with an appropriate label. In the following command, the required storage nodes are labelled as node=openebs.

  kubectl label nodes <node-name> node=openebs
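
You can confirm which nodes carry the label with a standard label selector:

  kubectl get nodes -l node=openebs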

Find the apiServer, provisioner, snapshotOperator, admission-server, and ndm sections in values.yaml and update the nodeSelector key. An example of the updated provisioner section is shown in the following snippet, where node: openebs is the nodeSelector label.

  provisioner:
    image: "quay.io/openebs/openebs-k8s-provisioner"
    imageTag: "1.0.0"
    replicas: 1
    nodeSelector:
      node: openebs
    tolerations: []
    affinity: {}

For disk filters in values.yaml (helm)

In values.yaml, find the ndm section and update excludeVendors: and excludePaths:.

  ndm:
    image: "quay.io/openebs/node-disk-manager-amd64"
    imageTag: "v0.4.0"
    sparse:
      enabled: "true"
      path: "/var/openebs/sparse"
      size: "10737418240"
      count: "1"
    filters:
      excludeVendors: "CLOUDBYT,OpenEBS"
      excludePaths: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
    nodeSelector: {}

Default Values for Helm Chart Parameters

Download the values.yaml of the stable/openebs chart and update it as per your needs. The configurable parameters are described below for reading convenience.

  PARAMETER | DESCRIPTION | DEFAULT
  rbac.create | Enable RBAC Resources | true
  image.pullPolicy | Container pull policy | IfNotPresent
  apiserver.image | Docker Image for API Server | quay.io/openebs/m-apiserver
  apiserver.imageTag | Docker Image Tag for API Server | 1.0.0
  apiserver.replicas | Number of API Server Replicas | 1
  apiserver.sparse.enabled | Create Sparse Pool based on Sparsefile | false
  provisioner.image | Docker Image for Provisioner | quay.io/openebs/openebs-k8s-provisioner
  provisioner.imageTag | Docker Image Tag for Provisioner | 1.0.0
  provisioner.replicas | Number of Provisioner Replicas | 1
  localProvisioner.image | Image for localProvisioner | quay.io/openebs/provisioner-localpv
  localProvisioner.imageTag | Image Tag for localProvisioner | 1.0.0
  localProvisioner.replicas | Number of localProvisioner Replicas | 1
  localProvisioner.basePath | BasePath for hostPath volumes on Nodes | /var/openebs/local
  webhook.image | Image for admission server | quay.io/openebs/admission-server
  webhook.imageTag | Image Tag for admission server | 1.0.0
  webhook.replicas | Number of admission server Replicas | 1
  snapshotOperator.provisioner.image | Docker Image for Snapshot Provisioner | quay.io/openebs/snapshot-provisioner
  snapshotOperator.provisioner.imageTag | Docker Image Tag for Snapshot Provisioner | 1.0.0
  snapshotOperator.controller.image | Docker Image for Snapshot Controller | quay.io/openebs/snapshot-controller
  snapshotOperator.controller.imageTag | Docker Image Tag for Snapshot Controller | 1.0.0
  snapshotOperator.replicas | Number of Snapshot Operator Replicas | 1
  ndm.image | Docker Image for Node Disk Manager | quay.io/openebs/node-disk-manager-amd64
  ndm.imageTag | Docker Image Tag for Node Disk Manager | v0.4.0
  ndm.sparse.path | Directory where Sparse files are created | /var/openebs/sparse
  ndm.sparse.size | Size of the sparse file in bytes | 10737418240
  ndm.sparse.count | Number of sparse files to be created | 1
  ndm.filters.excludeVendors | Exclude devices with specified vendor | CLOUDBYT,OpenEBS
  ndm.filters.excludePaths | Exclude devices with specified path patterns | loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md
  ndmOperator.image | Image for NDM Operator | quay.io/openebs/node-disk-operator-amd64
  ndmOperator.imageTag | Image Tag for NDM Operator | v0.4.0
  jiva.image | Docker Image for Jiva | quay.io/openebs/jiva
  jiva.imageTag | Docker Image Tag for Jiva | 1.0.0
  jiva.replicas | Number of Jiva Replicas | 3
  cstor.pool.image | Docker Image for cStor Pool | quay.io/openebs/cstor-pool
  cstor.pool.imageTag | Docker Image Tag for cStor Pool | 1.0.0
  cstor.poolMgmt.image | Docker Image for cStor Pool Management | quay.io/openebs/cstor-pool-mgmt
  cstor.poolMgmt.imageTag | Docker Image Tag for cStor Pool Management | 1.0.0
  cstor.target.image | Docker Image for cStor Target | quay.io/openebs/cstor-istgt
  cstor.target.imageTag | Docker Image Tag for cStor Target | 1.0.0
  cstor.volumeMgmt.image | Docker Image for cStor Volume Management | quay.io/openebs/cstor-volume-mgmt
  cstor.volumeMgmt.imageTag | Docker Image Tag for cStor Volume Management | 1.0.0
  policies.monitoring.image | Docker Image for Prometheus Exporter | quay.io/openebs/m-exporter
  policies.monitoring.imageTag | Docker Image Tag for Prometheus Exporter | 1.0.0
  analytics.enabled | Enable sending stats to Google Analytics | true
  analytics.pingInterval | Duration (hours) between sending ping stat | 24h
  HealthCheck.initialDelaySeconds | Delay before liveness probe is initiated | 30
  HealthCheck.periodSeconds | How often to perform the liveness probe | 60

Example configurations - kubectl

For nodeSelectors in openebs-operator.yaml

First, label the required nodes with an appropriate label. In the following command, the required storage nodes are labelled as node=openebs.

  kubectl label nodes <node-name> node=openebs

Next, in the downloaded openebs-operator.yaml, find the PodSpec for the openebs-provisioner, maya-apiserver, openebs-snapshot-operator, openebs-admission-server, and openebs-ndm pods and add the following key-value pair under the nodeSelector field.

  nodeSelector:
    node: openebs

For disk filters in openebs-operator.yaml

In the downloaded openebs-operator.yaml, find the openebs-ndm-config ConfigMap and update the values for the path-filter and vendor-filter keys.

  ---
  # This is the node-disk-manager related config.
  # It can be used to customize the disk probes and filters
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: openebs-ndm-config
    namespace: openebs
  data:
    # udev-probe is the default or primary probe which should be enabled to run ndm
    # filterconfigs contains the configs of filters - in the form of include
    # and exclude comma separated strings
    node-disk-manager.config: |
      probeconfigs:
        - key: udev-probe
          name: udev probe
          state: true
        - key: seachest-probe
          name: seachest probe
          state: true
        - key: smart-probe
          name: smart probe
          state: true
      filterconfigs:
        - key: os-disk-exclude-filter
          name: os disk exclude filter
          state: true
          exclude: "/,/etc/hosts,/boot"
        - key: vendor-filter
          name: vendor filter
          state: true
          include: ""
          exclude: "CLOUDBYT,OpenEBS"
        - key: path-filter
          name: path filter
          state: true
          include: ""
          exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
  ---

See Also:

OpenEBS Architecture

Installation troubleshooting

OpenEBS use cases