Mounting Volumes on Privileged Pods

You are viewing documentation for a release that is no longer supported. The latest supported version 3 release is [3.11]. For the most recent version 4 documentation, see [4].

Overview

Persistent volumes can be mounted on pods that run with the privileged security context constraint (SCC).

While this topic uses GlusterFS as a sample use case for mounting volumes on privileged pods, it can be adapted to use any supported storage plug-in.

Prerequisites

Creating the Persistent Volume

Creating the PersistentVolume makes the storage accessible to users, regardless of their project.

  1. As the admin, create the service, endpoint object, and persistent volume:

    $ oc create -f gluster-endpoints-service.yaml
    $ oc create -f gluster-endpoints.yaml
    $ oc create -f gluster-pv.yaml
  2. Verify that the objects were created:

    $ oc get svc
    NAME              CLUSTER_IP      EXTERNAL_IP   PORT(S)   SELECTOR   AGE
    gluster-cluster   172.30.151.58   <none>        1/TCP     <none>     24s

    $ oc get ep
    NAME              ENDPOINTS                             AGE
    gluster-cluster   192.168.59.102:1,192.168.59.103:1     2m

    $ oc get pv
    NAME                     LABELS   CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
    gluster-default-volume   <none>   2Gi        RWX           Available                    2d
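The gluster-endpoints-service.yaml, gluster-endpoints.yaml, and gluster-pv.yaml files referenced above are not shown in this topic. A minimal sketch, assuming the object names, server IPs, and capacity shown in the verification output above (and the gv0 volume path seen in the mount verification later in this topic), might look like the following. Adapt the IP addresses and the Gluster volume path to your environment.

```yaml
# gluster-endpoints-service.yaml (sketch): a service so the endpoints persist.
apiVersion: v1
kind: Service
metadata:
  name: gluster-cluster
spec:
  ports:
  - port: 1
---
# gluster-endpoints.yaml (sketch): the addresses of the Gluster servers.
apiVersion: v1
kind: Endpoints
metadata:
  name: gluster-cluster
subsets:
- addresses:
  - ip: 192.168.59.102
  - ip: 192.168.59.103
  ports:
  - port: 1
---
# gluster-pv.yaml (sketch): a 2Gi ReadWriteMany volume backed by the
# Gluster volume gv0, reached through the endpoints defined above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: gv0
```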

Creating a Regular User

Adding a regular user to the privileged SCC (or to a group given access to the SCC) allows them to run privileged pods:

  1. As the admin, add a user to the SCC:

    $ oc adm policy add-scc-to-user privileged <username>
  2. Log in as the regular user:

    $ oc login -u <username> -p <password>
  3. Create a new project:

    $ oc new-project <project_name>

Creating the Persistent Volume Claim

  1. As a regular user, create the PersistentVolumeClaim to access the volume:

    $ oc create -f gluster-pvc.yaml -n <project_name>
  2. Define your pod to access the claim:

    Example 1. Pod Definition

    apiVersion: v1
    kind: Pod
    metadata:
      name: gluster-nginx-priv
    spec:
      containers:
      - name: gluster-nginx-priv
        image: fedora/nginx
        volumeMounts:
        - mountPath: /mnt/gluster (1)
          name: gluster-volume-claim
        securityContext:
          privileged: true
      volumes:
      - name: gluster-volume-claim
        persistentVolumeClaim:
          claimName: gluster-claim (2)
    (1) Volume mount path within the pod.
    (2) The claimName must match the name of the PersistentVolumeClaim, gluster-claim, created in the previous step.
  3. Upon pod creation, the mount directory is created and the volume is attached to that mount point.

    As the regular user, create a pod from the definition:

    $ oc create -f gluster-S3-pod.yaml
  4. Verify that the pod was created successfully:

    $ oc get pods
    NAME                 READY     STATUS    RESTARTS   AGE
    gluster-nginx-priv   1/1       Running   0          36m

    It can take several minutes for the pod to be created.
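The gluster-pvc.yaml file referenced in step 1 is not shown in this topic. A minimal sketch, assuming the claim name gluster-claim used by the pod definition above and matching the 2Gi ReadWriteMany persistent volume created earlier:

```yaml
# gluster-pvc.yaml (sketch): requests the 2Gi ReadWriteMany Gluster volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
```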

Verifying the Setup

Checking the Pod SCC

  1. Export the pod configuration:

    $ oc get -o yaml --export pod <pod_name>
  2. Examine the output. Check that openshift.io/scc has the value of privileged:

    Example 2. Export Snippet

    metadata:
      annotations:
        openshift.io/scc: privileged

Verifying the Mount

  1. Access the pod and check that the volume is mounted:

    $ oc rsh <pod_name>
    [root@gluster-nginx-priv /]# mount
  2. Examine the output for the Gluster volume:

    Example 3. Volume Mount

    192.168.59.102:gv0 on /mnt/gluster type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)