Ceph Operator Helm Chart

Installs rook to create, configure, and manage Ceph clusters on Kubernetes.

Introduction

This chart bootstraps a rook-ceph-operator deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.11+

RBAC

If role-based access control (RBAC) is enabled in your cluster, you may need to give Tiller (the server-side component of Helm) additional permissions. If RBAC is not enabled, be sure to set rbacEnable to false when installing the chart.

```console
# Create a ServiceAccount for Tiller in the `kube-system` namespace
kubectl --namespace kube-system create sa tiller

# Create a ClusterRoleBinding for Tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# Patch Tiller's Deployment to use the new ServiceAccount
kubectl --namespace kube-system patch deploy/tiller-deploy -p '{"spec": {"template": {"spec": {"serviceAccountName": "tiller"}}}}'
```

Installing

The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster. After the helm chart is installed, you will need to create a Rook cluster.

The helm install command deploys rook on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the rook-ceph namespace (you will install your clusters into separate namespaces).
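Once the operator is running, the follow-up step of creating a Rook cluster might look like the following sketch of a `CephCluster` resource. This is illustrative only: the field names follow the `ceph.rook.io/v1` CRD, but the Ceph image tag and storage settings shown here are assumptions you should verify against the Rook version you installed.

```yaml
# Hypothetical minimal CephCluster manifest (verify fields against your Rook version)
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph       # the namespace the operator was installed into
spec:
  cephVersion:
    image: ceph/ceph:v15.2.4 # illustrative Ceph image tag, not part of this chart
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true        # assumption: consume all nodes and raw devices
    useAllDevices: true
```

Apply it with `kubectl create -f` after the operator pod is up.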

Rook currently publishes builds of the Ceph operator to the release and master channels.

Release

The release channel is the most recent release of Rook that is considered stable for the community.

```console
helm repo add rook-release https://charts.rook.io/release
helm install --namespace rook-ceph rook-release/rook-ceph
```

Master

The master channel includes the latest commits, with all automated tests green. Historically it has been very stable, though it is only recommended for testing. The critical point to consider is that upgrades are not supported to or from master builds.

To install the helm chart from master, you will need to pass the specific version returned by the search command.

```console
helm repo add rook-master https://charts.rook.io/master
helm search rook-ceph
helm install --namespace rook-ceph rook-master/rook-ceph --version <version>
```

For example:

```console
helm install --namespace rook-ceph rook-master/rook-ceph --version v0.7.0-278.gcbd9726
```

Development Build

To deploy from a local build from your development environment:

1. Build the Rook docker image: `make`
2. Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands
3. Install the helm chart:

```console
cd cluster/charts/rook-ceph
helm install --namespace rook-ceph --name rook-ceph .
```
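Step 2 above can be sketched as a single pipeline over SSH. This is an assumption-laden example: `node1` is a hypothetical node name, and the tag `rook/ceph:master` assumes your build produced an image with that tag (check `docker images` after running `make`).

```shell
# Export the locally built image and stream it onto a cluster node over SSH.
# "node1" is a placeholder; replace with each node that should run Rook pods.
docker save rook/ceph:master | ssh node1 'docker load'
```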

Uninstalling the Chart

To uninstall/delete the rook-ceph deployment:

```console
helm delete --purge rook-ceph
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

The following table lists the configurable parameters of the rook-operator chart and their default values.

| Parameter | Description | Default |
| --------- | ----------- | ------- |
| `image.repository` | Image | `rook/ceph` |
| `image.tag` | Image tag | `master` |
| `image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `rbacEnable` | If true, create & use RBAC resources | `true` |
| `pspEnable` | If true, create & use PSP resources | `true` |
| `resources` | Pod resource requests & limits | `{}` |
| `annotations` | Pod annotations | `{}` |
| `logLevel` | Global log level | `INFO` |
| `nodeSelector` | Kubernetes `nodeSelector` to add to the Deployment. | `<none>` |
| `tolerations` | List of Kubernetes `tolerations` to add to the Deployment. | `[]` |
| `unreachableNodeTolerationSeconds` | Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override the Kubernetes default of 5 minutes | `5s` |
| `currentNamespaceOnly` | Whether the operator should watch cluster CRD in its own namespace or not | `false` |
| `hostpathRequiresPrivileged` | Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions. | `false` |
| `mon.healthCheckInterval` | The frequency for the operator to check the mon health | `45s` |
| `mon.monOutTimeout` | The time to wait before failing over an unhealthy mon | `600s` |
| `discover.priorityClassName` | The priority class name to add to the discover pods | `<none>` |
| `discover.toleration` | Toleration for the discover pods | `<none>` |
| `discover.tolerationKey` | The specific key of the taint to tolerate | `<none>` |
| `discover.tolerations` | Array of tolerations in YAML format which will be added to discover deployment | `<none>` |
| `discover.nodeAffinity` | The node labels for affinity of `discover-agent` (***) | `<none>` |
| `csi.enableRbdDriver` | Enable Ceph CSI RBD driver. | `true` |
| `csi.enableCephfsDriver` | Enable Ceph CSI CephFS driver. | `true` |
| `csi.pluginPriorityClassName` | PriorityClassName to be set on csi driver plugin pods. | `<none>` |
| `csi.provisionerPriorityClassName` | PriorityClassName to be set on csi driver provisioner pods. | `<none>` |
| `csi.logLevel` | Set logging level for csi containers. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | `0` |
| `csi.enableGrpcMetrics` | Enable Ceph CSI GRPC Metrics. | `true` |
| `csi.provisionerTolerations` | Array of tolerations in YAML format which will be added to CSI provisioner deployment. | `<none>` |
| `csi.provisionerNodeAffinity` | The node labels for affinity of the CSI provisioner deployment (***) | `<none>` |
| `csi.pluginTolerations` | Array of tolerations in YAML format which will be added to Ceph CSI plugin DaemonSet | `<none>` |
| `csi.pluginNodeAffinity` | The node labels for affinity of the Ceph CSI plugin DaemonSet (***) | `<none>` |
| `csi.csiRBDProvisionerResource` | CEPH CSI RBD provisioner resource requirement list. | `<none>` |
| `csi.csiRBDPluginResource` | CEPH CSI RBD plugin resource requirement list. | `<none>` |
| `csi.csiCephFSProvisionerResource` | CEPH CSI CephFS provisioner resource requirement list. | `<none>` |
| `csi.csiCephFSPluginResource` | CEPH CSI CephFS plugin resource requirement list. | `<none>` |
| `csi.cephfsGrpcMetricsPort` | CSI CephFS driver GRPC metrics port. | `9091` |
| `csi.cephfsLivenessMetricsPort` | CSI CephFS driver metrics port. | `9081` |
| `csi.rbdGrpcMetricsPort` | Ceph CSI RBD driver GRPC metrics port. | `9090` |
| `csi.rbdLivenessMetricsPort` | Ceph CSI RBD driver metrics port. | `8080` |
| `csi.forceCephFSKernelClient` | Enable Ceph Kernel clients on kernel < 4.17 which support quotas for CephFS. | `true` |
| `csi.kubeletDirPath` | Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag) | `/var/lib/kubelet` |
| `csi.cephcsi.image` | Ceph CSI image. | `quay.io/cephcsi/cephcsi:v3.1.0` |
| `csi.rbdPluginUpdateStrategy` | CSI RBD plugin daemonset update strategy, supported values are `OnDelete` and `RollingUpdate`. | `OnDelete` |
| `csi.cephFSPluginUpdateStrategy` | CSI CephFS plugin daemonset update strategy, supported values are `OnDelete` and `RollingUpdate`. | `OnDelete` |
| `csi.registrar.image` | Kubernetes CSI registrar image. | `quay.io/k8scsi/csi-node-driver-registrar:v1.2.0` |
| `csi.resizer.image` | Kubernetes CSI resizer image. | `quay.io/k8scsi/csi-resizer:v0.4.0` |
| `csi.provisioner.image` | Kubernetes CSI provisioner image. | `quay.io/k8scsi/csi-provisioner:v1.6.0` |
| `csi.snapshotter.image` | Kubernetes CSI snapshotter image. | `quay.io/k8scsi/csi-snapshotter:v2.1.1` |
| `csi.attacher.image` | Kubernetes CSI Attacher image. | `quay.io/k8scsi/csi-attacher:v2.1.0` |
| `agent.flexVolumeDirPath` | Path where the Rook agent discovers the flex volume plugins (*) | `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` |
| `agent.libModulesDirPath` | Path where the Rook agent should look for kernel modules | `/lib/modules` |
| `agent.mounts` | Additional paths to be mounted in the agent container (**) | `<none>` |
| `agent.mountSecurityMode` | Mount Security Mode for the agent. | `Any` |
| `agent.priorityClassName` | The priority class name to add to the agent pods | `<none>` |
| `agent.toleration` | Toleration for the agent pods | `<none>` |
| `agent.tolerationKey` | The specific key of the taint to tolerate | `<none>` |
| `agent.tolerations` | Array of tolerations in YAML format which will be added to agent deployment | `<none>` |
| `agent.nodeAffinity` | The node labels for affinity of `rook-agent` (***) | `<none>` |

(*) For information on what to set `agent.flexVolumeDirPath` to, please refer to the Rook flexvolume documentation.

(**) `agent.mounts` should have this format: `mountname1=/host/path:/container/path,mountname2=/host/path2:/container/path2`

(***) `nodeAffinity` and `*NodeAffinity` options should have the format `"role=storage,rook; storage=ceph"` or `storage=;role=rook-example` or `storage=;` (checks only for presence of key)
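For illustration, the toleration and affinity formats described above might be expressed in a `values.yaml` as follows. This is a sketch: the label keys `storage` and value `ceph` are made-up examples, not defaults of this chart.

```yaml
discover:
  # Array of tolerations in YAML format (the discover.tolerations parameter)
  tolerations:
    - key: storage           # hypothetical taint key
      operator: Exists
      effect: NoSchedule
  # "storage=ceph" follows the "key=value" affinity format described above
  nodeAffinity: "storage=ceph"
agent:
  # "storage=;" checks only for the presence of the "storage" label key
  nodeAffinity: "storage=;"
```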

Command Line

You can pass the settings as helm command line parameters. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, the following command installs rook with RBAC disabled.

```console
helm install --namespace rook-ceph --name rook-ceph rook-release/rook-ceph --set rbacEnable=false
```

Settings File

Alternatively, a YAML file that specifies the values for the above parameters (`values.yaml`) can be provided while installing the chart.

```console
helm install --namespace rook-ceph --name rook-ceph rook-release/rook-ceph -f values.yaml
```

Here are the sample settings to get you started.

```yaml
image:
  prefix: rook
  repository: rook/ceph
  tag: master
  pullPolicy: IfNotPresent
resources:
  limits:
    cpu: 100m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 256Mi
rbacEnable: true
pspEnable: true
```
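Building on the sample above, scheduling-related parameters such as `nodeSelector` and `tolerations` from the configuration table could be added like this. The label `role: storage-node` and the taint key `storage` are assumptions for illustration; match them to your own node labels and taints.

```yaml
# Hypothetical scheduling overrides for the operator Deployment
nodeSelector:
  role: storage-node   # made-up node label; replace with your own
tolerations:
  - key: storage       # made-up taint key; replace with your own
    operator: Exists
    effect: NoSchedule
```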