Selector-Label Volume Binding

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4].

Overview

This guide provides the steps necessary to enable binding of persistent volume claims (PVCs) to persistent volumes (PVs) via selector and label attributes. By implementing selectors and labels, regular users are able to target provisioned storage by identifiers defined by a cluster administrator.

Motivation

With statically provisioned storage, developers seeking persistent storage must know a handful of identifying attributes of a PV in order to deploy and bind a PVC. This creates several problematic situations. Regular users might have to contact a cluster administrator to either deploy the PVC or provide the PV values. PV attributes alone do not convey the intended use of the storage volumes, nor do they provide methods by which volumes can be grouped.

Selector and label attributes can be used to abstract away PV details from the user while providing cluster administrators a way of identifying volumes by a descriptive and customizable tag. Through the selector-label method of binding, users are only required to know which labels are defined by the administrator.
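
Labels are ordinary Kubernetes labels, so a cluster administrator can also apply them to a PV that already exists. The command below is a minimal sketch using oc label; the PV name and label keys are the ones used in the examples later in this guide:

  # oc label pv gluster-volume volume-type=ssd aws-availability-zone=us-east-1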

The selector-label feature is currently available only for statically provisioned storage; it is not yet implemented for dynamically provisioned storage.

Deployment

This section reviews how to define and deploy PVCs.

Prerequisites

  1. A running OKD 3.3+ cluster

  2. A volume provided by a supported storage provider

  3. A user with a cluster-admin role binding
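
As a quick sanity check of these prerequisites, the cluster version and the current user can be verified from the command line; a minimal sketch (output omitted, as it varies by environment):

  # oc version
  # oc whoami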

Define the Persistent Volume and Claim

  1. As the cluster-admin user, define the PV. This example uses a GlusterFS volume. See the appropriate storage provider documentation for your provider’s configuration.

    Example 1. Persistent Volume with Labels

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: gluster-volume
      labels: (1)
        volume-type: ssd
        aws-availability-zone: us-east-1
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteMany
      glusterfs:
        endpoints: glusterfs-cluster
        path: myVol1
        readOnly: false
      persistentVolumeReclaimPolicy: Recycle

    (1) A PVC whose selectors match all of a PV’s labels will be bound, assuming a PV is available.

  2. Define the PVC:

    Example 2. Persistent Volume Claim with Selectors

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gluster-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
      selector: (1)
        matchLabels: (2)
          volume-type: ssd
          aws-availability-zone: us-east-1

    (1) Begin the selector section.
    (2) List all labels by which the user is requesting storage. These must match all labels of the targeted PV.
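
The selector field is a standard Kubernetes label selector, so set-based matchExpressions can be used instead of (or alongside) matchLabels. The claim below is a sketch of an equivalent selector for the same labels; it assumes your cluster honors matchExpressions for PVC binding, as the standard LabelSelector type does:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gluster-claim
  spec:
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Gi
    selector:
      matchExpressions:
        - key: volume-type
          operator: In
          values:
            - ssd
        - key: aws-availability-zone
          operator: In
          values:
            - us-east-1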

Deploy the Persistent Volume and Claim

As the cluster-admin user, create the persistent volume:

Example 3. Create the Persistent Volume

  # oc create -f gluster-pv.yaml
  persistentVolume "gluster-volume" created
  # oc get pv
  NAME             LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
  gluster-volume   map[]     2147483648   RWX           Available                       2s
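
Users who need to discover which labels the administrator has defined can list them directly; a sketch using the standard --show-labels flag (output omitted, as it varies by cluster):

  # oc get pv --show-labels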

Once the PV is created, any user whose PVC selectors match all of the PV’s labels can create the PVC.

Example 4. Create the Persistent Volume Claim

  # oc create -f gluster-pvc.yaml
  persistentVolumeClaim "gluster-claim" created
  # oc get pvc
  NAME            LABELS    STATUS    VOLUME
  gluster-claim             Bound     gluster-volume
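
To confirm the binding from both sides, the claim and the volume can be inspected again; a minimal sketch (output omitted, but the PV’s STATUS should now read Bound and its CLAIM column should reference the claim):

  # oc describe pvc gluster-claim
  # oc get pv gluster-volume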