OpenEBS for Percona


Introduction

Percona is highly scalable and requires the underlying persistent storage to be equally scalable and performant. OpenEBS provides scalable storage for Percona, enabling a simple, scalable RDS-like solution in both on-premise and cloud environments.

Advantages of using OpenEBS for Percona database:

  • Storage is highly available. Data is replicated onto three different nodes, even across zones, so node upgrades and node failures do not result in unavailability of persistent data.
  • Each Percona database instance gets a dedicated OpenEBS volume, so granular storage policies can be applied per database. The OpenEBS storage controller can be tuned with resources such as memory, CPU, and the number/type of disks for optimal performance; see the sketch after this list.
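
As an illustration of per-database policy tuning, here is a minimal sketch of a dedicated StorageClass. The class name openebs-cstor-percona and the resource figures are illustrative, not recommendations; TargetResourceLimits is one of the cStor storage policies that caps the resources of the volume's storage controller (target) pod.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-cstor-percona   # illustrative name
    annotations:
      openebs.io/cas-type: cstor
      cas.openebs.io/config: |
        - name: StoragePoolClaim
          value: "cstor-disk"
        - name: ReplicaCount
          value: "3"
        # Example cap on the storage controller pod's resources
        - name: TargetResourceLimits
          value: |-
            memory: 1Gi
            cpu: 200m
  provisioner: openebs.io/provisioner-iscsi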

Deployment model

(Deployment model diagram: OpenEBS and Percona)

As shown above, OpenEBS volumes need to be configured with three replicas for high availability. This configuration works well when the nodes (and hence the cStor pools) are deployed across Kubernetes zones.

Configuration workflow

  1. Install OpenEBS

    If OpenEBS is not installed in your K8s cluster, this can be done from here. If OpenEBS is already installed, go to the next step. A minimal install sketch is shown below.
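
    A minimal sketch of an installation, assuming the commonly published operator manifest URL; verify it against the current OpenEBS install guide.

    kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
    # Wait for the OpenEBS control-plane pods to come up
    kubectl get pods -n openebs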

  2. Configure cStor Pool

    If a cStor pool is not configured in your OpenEBS cluster, this can be done from here. If a cStor pool is already configured, go to the next step. A sample YAML named openebs-config.yaml for configuring a cStor pool is provided in the Configuration details below.
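
    As a sketch, the pool can be created and verified with the commands below; spc and csp are the short names for the StoragePoolClaim and cStorStoragePool resources in recent OpenEBS releases.

    # List the block devices discovered by node-disk-manager,
    # then copy the required device names into openebs-config.yaml
    kubectl get blockdevices -n openebs
    kubectl apply -f openebs-config.yaml
    # Verify the pool claim and the per-node pool instances
    kubectl get spc
    kubectl get csp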

  3. Create Storage Class

    You must configure a StorageClass to provision cStor volumes on the cStor pool. The StorageClass is the interface through which most OpenEBS storage policies are defined. In this solution we use a StorageClass to consume the cStor pool, which is created using external disks attached to the nodes. Since Percona-MySQL is deployed as a Deployment, it requires high availability of data at the storage level, so the cStor volume replicaCount is 3. A sample YAML named openebs-sc-disk.yaml that consumes the cStor pool with a cStor volume replica count of 3 is provided in the Configuration details below.
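
    A sketch of applying and verifying the StorageClass:

    kubectl apply -f openebs-sc-disk.yaml
    # Confirm the class is registered with the cStor provisioner
    kubectl get sc openebs-cstor-disk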

  4. Launch and test Percona:

    Create a file called percona-openebs-deployment.yaml and add the content from percona-openebs-deployment.yaml given in the Configuration details section. Run kubectl apply -f percona-openebs-deployment.yaml to deploy the Percona application. For more information, see the Percona documentation. Alternatively, you can use the stable Percona chart with Helm to deploy Percona in your cluster using the following command.

    helm install --name my-release --set persistence.enabled=true,persistence.storageClass=openebs-cstor-disk stable/percona
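
    A quick way to verify the deployment, assuming the pod and PVC names from the sample YAML in the Configuration details section (a Helm release will use different names):

    kubectl get pods
    kubectl get pvc demo-vol1-claim
    # Connect to MySQL using the root password from the sample YAML
    kubectl exec -it percona -- mysql -uroot -pk8sDem0 -e "SHOW DATABASES;"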

Reference at openebs.ci

A sample Percona server is deployed at https://openebs.ci

Sample YAML for running Percona-MySQL using cStor is available here

OpenEBS-CI dashboard of Percona

Post-deployment operations

Monitor OpenEBS Volume size

It is not yet seamless to increase the cStor volume size (refer to the roadmap item). Hence, it is recommended to allocate sufficient size during the initial configuration.
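
A sketch of keeping an eye on volume capacity via the cStor volume CRs; exact column names vary across OpenEBS releases.

  # cStor volumes are custom resources in the openebs namespace;
  # check their capacity and status
  kubectl get cstorvolumes -n openebs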

Monitor cStor Pool size

In most cases, the cStor pool may not be dedicated to the Percona database alone. It is recommended to watch the pool capacity and add more disks to the pool before it reaches the 80% threshold. See cStorPool metrics.
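
A sketch of watching pool capacity from the per-node pool CRs; the exact columns vary by release.

  # Per-node pool instances report allocated and free space
  kubectl get csp
  # Typical columns: ALLOCATED  FREE  CAPACITY  STATUS  TYPE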

Maintain volume replica quorum during node upgrades

cStor volume replicas need to be in quorum when applications are deployed as a Deployment and the cStor volume is configured with 3 replicas. Node reboots are common during Kubernetes upgrades; maintain volume replica quorum in such instances. See here for more details.
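
A sketch of a quorum check before and after each node reboot, using the cStor volume replica CRs:

  # All three replicas should be Healthy before draining the next node
  kubectl get cstorvolumereplicas -n openebs
  # With 3 replicas, quorum needs at least 2 replicas online;
  # wait until the rebooted node's replica is Healthy again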

Configuration details

openebs-config.yaml

  # Use the following YAML to create a cStor storage pool
  # (the associated StorageClass is given in openebs-sc-disk.yaml).
  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
  spec:
    name: cstor-disk
    type: disk
    poolSpec:
      poolType: striped
    # NOTE - Appropriate disks need to be fetched using `kubectl get blockdevices -n openebs`
    #
    # `Block devices` is a custom resource supported by OpenEBS with `node-disk-manager`
    # as the disk operator.
    # Replace the following with actual block device CRs from your cluster
    # (`kubectl get blockdevices -n openebs`) and uncomment the lines below
    # after updating the actual device names.
    blockDevices:
      blockDeviceList:
      # - blockdevice-69cdfd958dcce3025ed1ff02b936d9b4
      # - blockdevice-891ad1b581591ae6b54a36b5526550a2
      # - blockdevice-ceaab442d802ca6aae20c36d20859a0b
  ---

openebs-sc-disk.yaml

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-cstor-disk
    annotations:
      openebs.io/cas-type: cstor
      cas.openebs.io/config: |
        - name: StoragePoolClaim
          value: "cstor-disk"
        - name: ReplicaCount
          value: "3"
  provisioner: openebs.io/provisioner-iscsi
  reclaimPolicy: Delete
  ---

percona-openebs-deployment.yaml

  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: percona
    labels:
      name: percona
  spec:
    securityContext:
      fsGroup: 999
    containers:
      - name: percona
        image: percona
        resources:
          limits:
            cpu: 0.5
        args:
          - "--ignore-db-dir"
          - "lost+found"
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: k8sDem0
        ports:
          - containerPort: 3306
            name: percona
        volumeMounts:
          - mountPath: /var/lib/mysql
            name: demo-vol1
    volumes:
      - name: demo-vol1
        persistentVolumeClaim:
          claimName: demo-vol1-claim
  ---
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: demo-vol1-claim
  spec:
    storageClassName: openebs-cstor-disk
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 30G

See Also:

OpenEBS architecture

OpenEBS use cases

cStor pools overview