OpenEBS for Cassandra

Introduction

Apache Cassandra is a distributed NoSQL database management system designed to handle large amounts of data across many nodes, providing high availability with no single point of failure. It uses asynchronous masterless replication, allowing low-latency operations for all clients. On Kubernetes, Cassandra is usually deployed as a StatefulSet, and each Cassandra instance requires persistent storage. OpenEBS provisions persistent volumes on the fly when Cassandra instances are scaled up.

Advantages of using OpenEBS for Cassandra database:

  • No need to manage the local disks; they are managed by OpenEBS
  • Large PVs can be provisioned by OpenEBS for Cassandra
  • Start with small storage and add disks as needed, on the fly. Sometimes Cassandra instances are scaled up because of capacity constraints on the nodes. With OpenEBS persistent volumes, capacity can be thin provisioned and disks can be added to OpenEBS on the fly without service disruption
  • Cassandra sometimes needs highly available storage; in such cases OpenEBS volumes can be configured with 3 replicas.
  • If required, back up the Cassandra data periodically to S3 or any object storage so that the same data can be restored to the same or any other Kubernetes cluster

Note: Cassandra can be deployed either as a Deployment or as a StatefulSet. When Cassandra is deployed as a StatefulSet, you don’t need to replicate the data again at the OpenEBS level. When Cassandra is deployed as a Deployment, consider 3 OpenEBS replicas and choose the StorageClass accordingly (see the example StorageClass sketch below).
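
For example, a StorageClass for the Deployment case could set the cStor replica count to 3. This is only a sketch: the name openebs-cstor-disk-3rep is illustrative, and the single-replica StorageClass actually used in this guide is shown in the Configuration details below.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-cstor-disk-3rep   # illustrative name
    annotations:
      openebs.io/cas-type: cstor
      cas.openebs.io/config: |
        - name: StoragePoolClaim
          value: "cstor-disk"
        - name: ReplicaCount
          value: "3"                 # 3 storage replicas for the Deployment case
  provisioner: openebs.io/provisioner-iscsi
  reclaimPolicy: Delete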

Deployment model

OpenEBS and Cassandra

As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works fine when the nodes (and hence the cStor pools) are deployed across Kubernetes zones.

Configuration workflow

  1. Install OpenEBS

    If OpenEBS is not installed in your K8s cluster, this can be done from here. If OpenEBS is already installed, go to the next step.

  2. Connect to Kubera (Optional): Connecting the Kubernetes cluster to Kubera provides good visibility of storage resources. Kubera has various support options for enterprise customers.

  3. Configure cStor Pool

    After OpenEBS installation, a cStor pool has to be configured. If a cStor pool is not configured in your OpenEBS cluster, this can be done from here. During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named openebs-config.yaml for configuring a cStor pool is provided in the Configuration details below. If a cStor pool is already configured, go to the next step.

  4. Create Storage Class

    You must configure a StorageClass to provision cStor volumes on a given cStor pool. The StorageClass is the interface through which most of the OpenEBS storage policies are defined. In this solution we are using a StorageClass to consume the cStor pool which is created using external disks attached to the nodes. Since Cassandra is a StatefulSet application, it requires only one replica at the storage level, so the cStor volume replicaCount is 1. A sample YAML named openebs-sc-disk.yaml to consume the cStor pool with a cStor volume replica count of 1 is provided in the Configuration details below.

  5. Launch and test Cassandra

    Create a file named cassandra-statefulset.yaml using the sample provided in the Configuration details section. It deploys the Cassandra database with OpenEBS volumes. Run kubectl apply -f cassandra-statefulset.yaml to get Cassandra running; this also creates the required PVCs.

    Alternatively, you can deploy Cassandra in your cluster with Helm, using the Cassandra chart and the following command.

    helm install --namespace "cassandra" -n "cassandra" --storage-class=openebs-cstor-disk incubator/cassandra
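
To verify the deployment (assuming the cassandra-statefulset.yaml route in the default namespace), watch the pods and PVCs come up and then check the ring status with nodetool:

  # Watch the Cassandra pods start one after another
  kubectl get pods -l app=cassandra -w

  # Check that a PVC was created for each pod and bound to an OpenEBS cStor volume
  kubectl get pvc

  # Once all pods are Running, verify that the Cassandra ring has formed
  kubectl exec cassandra-0 -- nodetool status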

Post-deployment operations

Monitor OpenEBS Volume size

It is not seamless to increase the cStor volume size (refer to the roadmap item). Hence, it is recommended that sufficient size is allocated during the initial configuration. However, an alert can be set up for a volume size threshold using Kubera.
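
In the meantime, the capacity of the provisioned cStor volumes can be inspected from the cStorVolume custom resources and the PVCs. A minimal sketch (column output varies by OpenEBS version):

  # cStorVolume CRs are created in the openebs namespace, one per provisioned PV
  kubectl get cstorvolume -n openebs

  # The PVC view shows the capacity requested by the application
  kubectl get pvc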

Monitor cStor Pool size

In most cases, the cStor pool may not be dedicated to the Cassandra database alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See cStorPool metrics.
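
For example, pool capacity and usage can be checked from the cStorPool custom resources. A minimal sketch (pool names will differ in your cluster):

  # List cStor pools; recent releases print allocated, free, and total capacity columns
  kubectl get csp

  # Inspect a specific pool in detail (replace <csp-name> with a name from the output above)
  kubectl describe csp <csp-name>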

Configuration details

openebs-config.yaml

  #Use the following YAMLs to create a cStor Storage Pool.
  # and associated storage class.
  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
  spec:
    name: cstor-disk
    type: disk
    poolSpec:
      poolType: striped
    # NOTE - Appropriate disks need to be fetched using `kubectl get disks`
    #
    # `Disk` is a custom resource supported by OpenEBS with `node-disk-manager`
    # as the disk operator
    # Replace the following with actual disk CRs from your cluster `kubectl get disks`
    # Uncomment the below lines after updating the actual disk names.
    disks:
      diskList:
      # Replace the following with actual disk CRs from your cluster from `kubectl get disks`
      # - disk-184d99015253054c48c4aa3f17d137b1
      # - disk-2f6bced7ba9b2be230ca5138fd0b07f1
      # - disk-806d3e77dd2e38f188fdaf9c46020bdc
      # - disk-8b6fb58d0c4e0ff3ed74a5183556424d
      # - disk-bad1863742ce905e67978d082a721d61
      # - disk-d172a48ad8b0fb536b9984609b7ee653
  ---

openebs-sc-disk.yaml

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-cstor-disk
    annotations:
      openebs.io/cas-type: cstor
      cas.openebs.io/config: |
        - name: StoragePoolClaim
          value: "cstor-disk"
        - name: ReplicaCount
          value: "1"
  provisioner: openebs.io/provisioner-iscsi
  reclaimPolicy: Delete
  ---
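
Once the disk CR names in openebs-config.yaml have been filled in, both files can be applied, for example:

  kubectl apply -f openebs-config.yaml
  kubectl apply -f openebs-sc-disk.yaml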

cassandra-statefulset.yaml

  apiVersion: apps/v1beta1
  kind: StatefulSet
  metadata:
    name: cassandra
    labels:
      app: cassandra
  spec:
    serviceName: cassandra
    replicas: 3
    selector:
      matchLabels:
        app: cassandra
    template:
      metadata:
        labels:
          app: cassandra
      spec:
        containers:
        - name: cassandra
          image: gcr.io/google-samples/cassandra:v11
          imagePullPolicy: Always
          ports:
          - containerPort: 7000
            name: intra-node
          - containerPort: 7001
            name: tls-intra-node
          - containerPort: 7199
            name: jmx
          - containerPort: 9042
            name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
              - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "PID=$(pidof java) && kill $PID && while ps -p $PID > /dev/null; do sleep 1; done"]
          env:
          - name: MAX_HEAP_SIZE
            value: 512M
          - name: HEAP_NEWSIZE
            value: 100M
          - name: CASSANDRA_SEEDS
            value: "cassandra-0.cassandra.default.svc.cluster.local"
          - name: CASSANDRA_CLUSTER_NAME
            value: "K8Demo"
          - name: CASSANDRA_DC
            value: "DC1-K8Demo"
          - name: CASSANDRA_RACK
            value: "Rack1-K8Demo"
          - name: CASSANDRA_AUTO_BOOTSTRAP
            value: "false"
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
              - /bin/bash
              - -c
              - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
          - name: cassandra-data
            mountPath: /cassandra_data
    volumeClaimTemplates:
    - metadata:
        name: cassandra-data
        annotations:
          volume.beta.kubernetes.io/storage-class: openebs-cstor-disk
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5G
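
Note that the StatefulSet above sets serviceName: cassandra and uses the seed address cassandra-0.cassandra.default.svc.cluster.local, so it expects a headless Service named cassandra in the same namespace. A minimal sketch, following the upstream Kubernetes Cassandra example:

  apiVersion: v1
  kind: Service
  metadata:
    name: cassandra
    labels:
      app: cassandra
  spec:
    clusterIP: None      # headless Service; gives each pod a stable DNS name
    ports:
    - port: 9042
      name: cql
    selector:
      app: cassandra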

See Also:

OpenEBS architecture

OpenEBS use cases

cStor pools overview