Switching an Integrated OpenShift Container Registry to GlusterFS

Overview

This topic reviews how to attach a GlusterFS volume to an integrated OpenShift Container Registry. This can be done with any of Containerized GlusterFS, External GlusterFS, or standalone GlusterFS. It is assumed that the registry has already been started and a volume has been created.

Prerequisites

  • An existing registry deployed without configuring storage.

  • An existing GlusterFS volume.

  • glusterfs-fuse installed on all schedulable nodes.

  • A user with the cluster-admin role binding.

    • For this guide, that user is admin.

All oc commands are executed on the master node as the admin user.
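
It can help to first confirm that the current session is using that user and that it has cluster-level access. A minimal check, assuming the admin user from the prerequisites, is:

  $ oc whoami
  admin
  $ oc get nodes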

Manually Provision the GlusterFS PersistentVolumeClaim

  1. To enable static provisioning, first create a GlusterFS volume. See the GlusterFS Administration Guide for information on how to do this using the gluster command-line interface or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
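
    As a rough sketch only, a replicated volume could be created directly with the gluster CLI; the server host names and brick paths below are placeholders, not values taken from this guide:

    $ gluster volume create myVol1 replica 3 \
        gluster1.example.com:/bricks/brick1/myVol1 \
        gluster2.example.com:/bricks/brick1/myVol1 \
        gluster3.example.com:/bricks/brick1/myVol1
    $ gluster volume start myVol1

    On heketi-managed clusters, the roughly equivalent command is heketi-cli volume create --size=2 --name=myVol1.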

  2. Define the following Service and Endpoints in gluster-endpoints.yaml:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster (1)
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster (1)
    subsets:
    - addresses:
      - ip: 192.168.122.221 (2)
      ports:
      - port: 1 (3)
    - addresses:
      - ip: 192.168.122.222 (2)
      ports:
      - port: 1 (3)
    - addresses:
      - ip: 192.168.122.223 (2)
      ports:
      - port: 1 (3)
    (1) These names must match.
    (2) The ip values must be the actual IP addresses of a GlusterFS server, not hostnames.
    (3) The port number is ignored.
  3. From the OKD master host, create the Service and Endpoints:

    $ oc create -f gluster-endpoints.yaml
    service "glusterfs-cluster" created
    endpoints "glusterfs-cluster" created
  4. Verify that the Service and Endpoints were created:

    $ oc get services
    NAME                CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    glusterfs-cluster   172.30.205.34   <none>        1/TCP     <none>     44s

    $ oc get endpoints
    NAME                ENDPOINTS                                                AGE
    docker-registry     10.1.0.3:5000                                            4h
    glusterfs-cluster   192.168.122.221:1,192.168.122.222:1,192.168.122.223:1    11s
    kubernetes          172.16.35.3:8443                                         4d

    Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
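
    For example, to make the same definition available in a second project (the project name here is purely illustrative), create it again with the -n flag:

    $ oc create -f gluster-endpoints.yaml -n some-other-project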

  5. To access the volume, the container must run with a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered as follows:

    $ mkdir -p /mnt/glusterfs/myVol1
    $ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
    $ ls -lnZ /mnt/glusterfs/
    drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 (1) (2)
    (1) The UID is 592.
    (2) The GID is 590.
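
    The mount above is only needed to inspect ownership; once the UID and GID have been recorded, it can be removed:

    $ umount /mnt/glusterfs/myVol1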
  6. Define the following PersistentVolume (PV) in gluster-pv.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-default-volume (1)
      annotations:
        pv.beta.kubernetes.io/gid: "590" (2)
    spec:
      capacity:
        storage: 2Gi (3)
      accessModes: (4)
      - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster (5)
        path: myVol1 (6)
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    (1) The name of the volume.
    (2) The GID on the root of the GlusterFS volume.
    (3) The amount of storage allocated to this volume.
    (4) accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    (5) The Endpoints resource previously created.
    (6) The GlusterFS volume that will be accessed.
  7. From the OKD master host, create the PV:

    $ oc create -f gluster-pv.yaml
  8. Verify that the PV was created:

    $ oc get pv
    NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
    gluster-default-volume   <none>    2147483648   RWX           Available                       2s
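
    Optionally, inspect the PV in more detail (for example, its reclaim policy, endpoints, and path) before binding it:

    $ oc describe pv gluster-default-volume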
  9. Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim (1)
    spec:
      accessModes:
      - ReadWriteMany (2)
      resources:
        requests:
          storage: 1Gi (3)
    (1) The claim name is referenced by the pod under its volumes section.
    (2) Must match the accessModes of the PV.
    (3) This claim will look for PVs offering 1Gi or greater capacity.
  10. From the OKD master host, create the PVC:

    $ oc create -f gluster-claim.yaml
  11. Verify that the PV and PVC are bound:

    $ oc get pv
    NAME                     LABELS    CAPACITY     ACCESSMODES   STATUS    CLAIM           REASON    AGE
    gluster-default-volume   <none>    2147483648   RWX           Bound     gluster-claim             37s

    $ oc get pvc
    NAME            LABELS    STATUS    VOLUME                   CAPACITY     ACCESSMODES   AGE
    gluster-claim   <none>    Bound     gluster-default-volume   2147483648   RWX           24s

PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are cluster-scoped rather than project-scoped, but each PV can be bound by only one PVC at a time, so additional projects that need the same GlusterFS volume should define their own PV (and Endpoints) pointing at the same volume.
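
As an illustration only (the project name and the gluster-pv2.yaml file below are assumptions, not part of this guide; the file would be a copy of gluster-pv.yaml with a different metadata.name), making the same GlusterFS volume available from a second project would involve its own Endpoints, PV, and PVC:

  $ oc create -f gluster-endpoints.yaml -n some-other-project
  $ oc create -f gluster-pv2.yaml
  $ oc create -f gluster-claim.yaml -n some-other-project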

Attach the PersistentVolumeClaim to the Registry

Before moving forward, ensure that the docker-registry service is running.

  $ oc get svc
  NAME              CLUSTER_IP       EXTERNAL_IP   PORT(S)    SELECTOR                  AGE
  docker-registry   172.30.167.194   <none>        5000/TCP   docker-registry=default   18m

If either the docker-registry service or its associated pod is not running, refer back to the registry setup instructions for troubleshooting before continuing.
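
A quick way to check both at once (assuming the registry runs in the default project with the selector shown above) is:

  $ oc get pods -n default -l docker-registry=default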

Then, attach the PVC:

  $ oc set volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc \
       --claim-name=gluster-claim --overwrite
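
Because the default registry deployment configuration has a config-change trigger, this update typically rolls out a new registry deployment. Running oc set volume against the deployment configuration without the --add flag then lists the attached volumes, which should show registry-storage backed by the gluster-claim PVC:

  $ oc set volume deploymentconfigs/docker-registry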

Setting up the Registry provides more information on using an OpenShift Container Registry.