Complete Example Using GlusterFS

Overview

This topic provides an end-to-end example of how to use an existing Containerized GlusterFS, External GlusterFS, or standalone GlusterFS cluster as persistent storage for OKD. It is assumed that a working GlusterFS cluster is already set up. For help installing Containerized GlusterFS or External GlusterFS, see Persistent Storage Using GlusterFS.

For an end-to-end example of how to dynamically provision GlusterFS volumes, see Complete Example Using GlusterFS for Dynamic Provisioning.

All oc commands are executed on the OKD master host.

Prerequisites

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

  # yum install glusterfs-fuse

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

  # yum update glusterfs-fuse
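
To confirm that the client is in place before continuing (an optional check, not part of the original steps), query the package and look for the mount helper:

  # rpm -q glusterfs-fuse
  # which mount.glusterfs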

By default, SELinux does not allow writing from a pod to a remote GlusterFS server. To enable writing to GlusterFS volumes with SELinux enforcing, run the following on each node that will mount GlusterFS volumes:

  $ sudo setsebool -P virt_sandbox_use_fusefs on (1)
  $ sudo setsebool -P virt_use_fusefs on
(1) The -P option makes the boolean persistent between reboots.

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

If you use Atomic Host, the SELinux booleans are cleared when the host is upgraded, so you must set them again after each upgrade.
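
To confirm that both booleans are set on a node (a quick check, not required by the procedure), query them with getsebool:

  $ getsebool virt_sandbox_use_fusefs virt_use_fusefs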

Static Provisioning

  1. To enable static provisioning, first create a GlusterFS volume. See the GlusterFS Administration Guide for information on how to do this using the gluster command-line interface or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.
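
If you are creating the volume with the gluster command-line interface, the commands look roughly like the following. This is only a sketch: the replica count and the /bricks/myVol1 brick paths are assumptions for illustration, and the IP addresses reuse the example GlusterFS servers from the Endpoints definition below.

    # gluster volume create myVol1 replica 3 \
        192.168.122.221:/bricks/myVol1 \
        192.168.122.222:/bricks/myVol1 \
        192.168.122.223:/bricks/myVol1
    # gluster volume start myVol1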

  2. Define the following Service and Endpoints in gluster-endpoints.yaml:

    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: glusterfs-cluster (1)
    spec:
      ports:
      - port: 1
    ---
    apiVersion: v1
    kind: Endpoints
    metadata:
      name: glusterfs-cluster (1)
    subsets:
    - addresses:
      - ip: 192.168.122.221 (2)
      ports:
      - port: 1 (3)
    - addresses:
      - ip: 192.168.122.222 (2)
      ports:
      - port: 1 (3)
    - addresses:
      - ip: 192.168.122.223 (2)
      ports:
      - port: 1 (3)
    (1) These names must match.
    (2) The ip values must be the actual IP addresses of a GlusterFS server, not hostnames.
    (3) The port number is ignored.
  3. From the OKD master host, create the Service and Endpoints:

    $ oc create -f gluster-endpoints.yaml
    service "glusterfs-cluster" created
    endpoints "glusterfs-cluster" created
  4. Verify that the Service and Endpoints were created:

    $ oc get services
    NAME                CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    glusterfs-cluster   172.30.205.34   <none>        1/TCP     <none>     44s

    $ oc get endpoints
    NAME                ENDPOINTS                                                AGE
    docker-registry     10.1.0.3:5000                                            4h
    glusterfs-cluster   192.168.122.221:1,192.168.122.222:1,192.168.122.223:1    11s
    kubernetes          172.16.35.3:8443                                         4d

    Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.
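
    Because Endpoints are project-scoped, reusing this volume from another project means creating the same Service and Endpoints there as well. For example, with a hypothetical project name:

    $ oc create -f gluster-endpoints.yaml -n another-project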

  5. To access the volume, the container must run with a user ID (UID) or group ID (GID) that has access to the file system on the volume. You can discover these IDs by temporarily mounting the volume:

    $ mkdir -p /mnt/glusterfs/myVol1
    $ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
    $ ls -lnZ /mnt/glusterfs/
    drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0    myVol1 (1) (2)
    (1) The UID is 592.
    (2) The GID is 590.
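
    The mount in the previous step is only needed to read the ownership; once you have noted the UID and GID, you can unmount it (an optional cleanup step, not part of the original procedure):

    $ umount /mnt/glusterfs/myVol1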
  6. Define the following PersistentVolume (PV) in gluster-pv.yaml:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-default-volume (1)
      annotations:
        pv.beta.kubernetes.io/gid: "590" (2)
    spec:
      capacity:
        storage: 2Gi (3)
      accessModes: (4)
      - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster (5)
        path: myVol1 (6)
        readOnly: false
      persistentVolumeReclaimPolicy: Retain
    (1) The name of the volume.
    (2) The GID on the root of the GlusterFS volume.
    (3) The amount of storage allocated to this volume.
    (4) accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    (5) The Endpoints resource previously created.
    (6) The GlusterFS volume that will be accessed.
  7. From the OKD master host, create the PV:

    $ oc create -f gluster-pv.yaml
  8. Verify that the PV was created:

    $ oc get pv
    NAME                     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
    gluster-default-volume   <none>   2Gi        RWX           Available                    2s
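
    If the PV does not appear as expected, oc describe prints its full spec, including the gid annotation and the referenced Endpoints (output omitted here):

    $ oc describe pv gluster-default-volume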
  9. Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim (1)
    spec:
      accessModes:
      - ReadWriteMany (2)
      resources:
        requests:
          storage: 1Gi (3)
    (1) The claim name is referenced by the pod under its volumes section.
    (2) Must match the accessModes of the PV.
    (3) This claim will look for PVs offering 1Gi or greater capacity.
  10. From the OKD master host, create the PVC:

    $ oc create -f gluster-claim.yaml
  11. Verify that the PV and PVC are bound:

    $ oc get pv
    NAME                     LABELS   CAPACITY   ACCESSMODES   STATUS   CLAIM           REASON   AGE
    gluster-default-volume   <none>   2Gi        RWX           Bound    gluster-claim            37s

    $ oc get pvc
    NAME            LABELS   STATUS   VOLUME                   CAPACITY   ACCESSMODES   AGE
    gluster-claim   <none>   Bound    gluster-default-volume   2Gi        RWX           24s

PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not bound to a single project, so PVCs across multiple projects may refer to the same PV.
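
If a claim remains Pending instead of binding, describing it usually shows why, such as a capacity or accessModes mismatch with the available PVs:

    $ oc describe pvc gluster-claim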

Using the Storage

At this point, you have a GlusterFS volume bound to a PVC. You can now use this PVC in a pod.

  1. Create the pod object definition:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hello-openshift-pod
      labels:
        name: hello-openshift-pod
    spec:
      containers:
      - name: hello-openshift-pod
        image: openshift/hello-openshift
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: gluster-vol1
          mountPath: /usr/share/nginx/html
          readOnly: false
      volumes:
      - name: gluster-vol1
        persistentVolumeClaim:
          claimName: gluster-claim (1)
    (1) The name of the PVC created in the previous step.
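
    Because the PV carries the pv.beta.kubernetes.io/gid annotation, GID 590 is added to the pod's supplemental groups automatically when this claim is used. If your PV does not have that annotation, one alternative (a sketch, not part of the original example; only the relevant fragment of the pod spec is shown) is to set the group explicitly:

    spec:
      securityContext:
        supplementalGroups:
        - 590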
  2. From the OKD master host, create the pod:

    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
  3. View the pod. Give it a few minutes, as it might need to download the image if it does not already exist:

    # oc get pods -o wide
    NAME                  READY   STATUS    RESTARTS   AGE   IP          NODE
    hello-openshift-pod   1/1     Running   0          9m    10.38.0.0   node1
  4. Use oc exec to open a shell in the container and create an index.html file in the pod's mountPath:

    $ oc exec -ti hello-openshift-pod /bin/sh
    $ cd /usr/share/nginx/html
    $ echo 'Hello OpenShift!!!' > index.html
    $ ls
    index.html
    $ exit
  5. Now curl the URL of the pod:

    # curl http://10.38.0.0
    Hello OpenShift!!!
  6. Delete the pod, recreate it, and wait for it to come up:

    # oc delete pod hello-openshift-pod
    pod "hello-openshift-pod" deleted
    # oc create -f hello-openshift-pod.yaml
    pod "hello-openshift-pod" created
    # oc get pods -o wide
    NAME                  READY   STATUS    RESTARTS   AGE   IP          NODE
    hello-openshift-pod   1/1     Running   0          9m    10.37.0.0   node1
  7. Now curl the pod again and it should still have the same data as before. Note that its IP address may have changed:

    # curl http://10.37.0.0
    Hello OpenShift!!!
  8. Check that the index.html file was written to GlusterFS storage by doing the following on one of the nodes running GlusterFS:

    $ mount | grep heketi
    /dev/mapper/VolGroup00-LogVol00 on /var/lib/heketi type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_1e730a5462c352835055018e1874e578 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_1e730a5462c352835055018e1874e578 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
    /dev/mapper/vg_f92e09091f6b20ab12b02a2513e4ed90-brick_d8c06e606ff4cc29ccb9d018c73ee292 on /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292 type xfs (rw,noatime,seclabel,nouuid,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)

    $ cd /var/lib/heketi/mounts/vg_f92e09091f6b20ab12b02a2513e4ed90/brick_d8c06e606ff4cc29ccb9d018c73ee292/brick
    $ ls
    index.html
    $ cat index.html
    Hello OpenShift!!!
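
    The brick paths above are what you see when the volume is managed by heketi. If myVol1 was created directly with the gluster command-line interface instead, the brick directories will differ from this example; list them on a GlusterFS server and check there:

    # gluster volume info myVol1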