OpenEBS for GitLab

Introduction

GitLab is a popular solution for building on-premise, cloud native CI/CD platforms: it is a single application for the entire software development lifecycle. The GitLab Helm charts are designed so that the entire infrastructure, including the underlying databases and storage needed by GitLab, is dynamically provisioned. This solution discusses the use case of serving all the databases required to run GitLab from a single pool of OpenEBS storage.

Advantages of using OpenEBS for GitLab:

  • OpenEBS acts as a single storage platform for all stateful applications, including Gitaly, Redis, PostgreSQL, Minio, and Prometheus

  • OpenEBS volumes are highly available. Node loss, reboots, and Kubernetes upgrades will not affect the availability of persistent storage to the stateful applications

  • Storage is scalable on demand. You can start with a small amount of storage for the databases required by GitLab and scale it on demand

Deployment model

GitLab deployment using OpenEBS

Configuration workflow

  1. Install OpenEBS

    If OpenEBS is not installed in your K8s cluster, this can be done from here. If OpenEBS is already installed, go to the next step.
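
    A minimal install sketch, assuming a cStor-era OpenEBS release and the standard operator manifest; verify the manifest URL and namespace against the release you are deploying:

    # Install the OpenEBS operator
    kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
    # Wait until the OpenEBS control-plane pods are Running
    kubectl get pods -n openebs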

  2. Configure cStor Pool

    After OpenEBS installation, a cStor pool has to be configured. If a cStor pool is not yet configured in your OpenEBS cluster, this can be done from here. During cStor pool creation, make sure that the maxPools parameter is set to >=3. A sample YAML named openebs-config.yaml for configuring a cStor pool is provided in the Configuration details below. If a cStor pool is already configured, go to the next step.
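
    Once the blockDeviceList entries in openebs-config.yaml have been updated with the actual block device names from your cluster, the pool can be created and verified as sketched below (the spc and csp short names assume a cStor-era OpenEBS release):

    # Create the StoragePoolClaim defined in the Configuration details section
    kubectl apply -f openebs-config.yaml
    # Verify the claim and the per-node cStor pools
    kubectl get spc
    kubectl get csp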

  3. Create Storage Class

    You must configure a StorageClass to provision cStor volumes on a given cStor pool. The StorageClass is the interface through which most OpenEBS storage policies are defined. In this solution, we use a StorageClass to consume the cStor pool, which is created using external disks attached to the nodes. Since GitLab is a StatefulSet application, it requires only a single storage replica, so the cStor volume replicaCount is set to 1. A sample YAML named openebs-sc-disk.yaml to consume the cStor pool with a cStor volume replica count of 1 is provided in the Configuration details below.
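
    A quick sketch of applying and verifying the StorageClass:

    # Create the StorageClass defined in the Configuration details section
    kubectl apply -f openebs-sc-disk.yaml
    # Confirm that the new StorageClass is available
    kubectl get sc openebs-cstor-disk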

  4. Launch and test GitLab

    Patch the StorageClass that will be used for the GitLab installation to make it the default, using the following command.

    kubectl patch storageclass openebs-cstor-disk -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

    Use the stable GitLab Helm chart to deploy GitLab in your cluster using the following commands. The chart creates PVCs for GitLab's components, for example a 1Gi volume for storing generated configuration files, keys, and certificates, and a 10Gi volume for storing Git data and other project files.

    helm repo add gitlab https://charts.gitlab.io/
    helm repo update
    helm upgrade --install gitlab gitlab/gitlab \
      --timeout 600 \
      --set global.hosts.domain=<domain_name> \
      --set global.hosts.externalIP=<GitLab_Service_IP>

    For more information on installation, see the GitLab documentation.
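
    As a quick check, the PVCs created by the chart should be bound against the openebs-cstor-disk StorageClass (run this in the namespace used for the Helm release):

    # All GitLab PVCs should show STATUS as Bound
    kubectl get pvc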

    Note: You may need to add fsGroup: 1000 under spec.template.spec.securityContext in the corresponding gitlab-prometheus-server Deployment spec so that metrics can be written to its volume.
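
    One way to apply this is sketched below; the Deployment name assumes the default Helm release name used above, and a namespace flag may be needed if GitLab was installed outside the default namespace:

    # Set fsGroup so Prometheus can write metrics to its persistent volume
    kubectl patch deployment gitlab-prometheus-server --type merge \
      -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":1000}}}}}'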

Post deployment Operations

Monitor OpenEBS Volume size

It is not seamless to increase the cStor volume size (refer to the roadmap item). Hence, it is recommended that sufficient size is allocated during the initial configuration.

Monitor cStor Pool size

In most cases, the cStor pool may not be dedicated to GitLab's databases alone. It is recommended to watch the pool capacity and add more disks to the pool before it hits the 80% threshold. See cStorPool metrics.
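
A quick way to watch pool usage, assuming a cStor-era OpenEBS release where csp is the short name for CStorPool:

    # The ALLOCATED, FREE, and CAPACITY columns show how full each pool is
    kubectl get csp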

Maintain volume replica quorum during node upgrades

cStor volume replicas need to be in quorum when applications are deployed as a Kubernetes Deployment and the cStor volume is configured with 3 replicas. Node reboots can be common during a Kubernetes upgrade; maintain volume replica quorum in such instances. See here for more details.

Configuration details

openebs-config.yaml

  # Use the following YAML to create a cStor storage pool
  # and associated storage class.
  apiVersion: openebs.io/v1alpha1
  kind: StoragePoolClaim
  metadata:
    name: cstor-disk
  spec:
    name: cstor-disk
    type: disk
    poolSpec:
      poolType: striped
    # NOTE - Appropriate disks need to be fetched using `kubectl get blockdevices -n openebs`
    #
    # `Block devices` is a custom resource supported by OpenEBS with `node-disk-manager`
    # as the disk operator
    # Uncomment the below lines after updating the actual disk names.
    blockDevices:
      blockDeviceList:
      # Replace the following with actual disk CRs from your cluster from `kubectl get blockdevices -n openebs`
      # - blockdevice-69cdfd958dcce3025ed1ff02b936d9b4
      # - blockdevice-891ad1b581591ae6b54a36b5526550a2
      # - blockdevice-ceaab442d802ca6aae20c36d20859a0b
  ---

openebs-sc-disk.yaml

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: openebs-cstor-disk
    annotations:
      openebs.io/cas-type: cstor
      cas.openebs.io/config: |
        - name: StoragePoolClaim
          value: "cstor-disk"
        - name: ReplicaCount
          value: "1"
  provisioner: openebs.io/provisioner-iscsi
  reclaimPolicy: Delete
  ---

See Also:

OpenEBS architecture

OpenEBS use cases

cStor pools overview
