Dynamic provisioning and creating storage classes

Overview

The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources.

The OKD persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure.

Many storage types are available for use as persistent volumes in OKD. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plug-in APIs.
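For example, a user can request dynamically provisioned storage simply by naming a StorageClass in a persistent volume claim (PVC). The following is a minimal sketch; the StorageClass name gold and the requested size are placeholders:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim             # illustrative claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # illustrative size
  storageClassName: gold    # name of an existing StorageClass (placeholder)
```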

To enable dynamic provisioning, add the openshift_master_dynamic_provisioning_enabled variable to the [OSEv3:vars] section of the Ansible inventory file and set its value to True:

```
[OSEv3:vars]
openshift_master_dynamic_provisioning_enabled=True
```

Available dynamically provisioned plug-ins

OKD provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster’s configured provider’s API to create new storage resources:

| Storage Type | Provisioner Plug-in Name | Required Configuration | Notes |
|---|---|---|---|
| OpenStack Cinder | kubernetes.io/cinder | Configuring for OpenStack | |
| AWS Elastic Block Store (EBS) | kubernetes.io/aws-ebs | Configuring for AWS | For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/xxxx,Value=clusterid, where xxxx and clusterid are unique per cluster. In versions prior to 3.6, this was Key=KubernetesCluster,Value=clusterid. |
| GCE Persistent Disk (gcePD) | kubernetes.io/gce-pd | Configuring for GCE | In multi-zone configurations, it is advisable to run one OpenShift cluster per GCE project to avoid PVs being created in zones where no node of the current cluster exists. |
| GlusterFS | kubernetes.io/glusterfs | Configuring GlusterFS | |
| Ceph RBD | kubernetes.io/rbd | Configuring Ceph RBD | |
| Trident from NetApp | netapp.io/trident | Configuring for Trident | Storage orchestrator for NetApp ONTAP, SolidFire, and E-Series storage. |
| VMware vSphere | kubernetes.io/vsphere-volume | Getting Started with vSphere and Kubernetes | |
| Azure Disk | kubernetes.io/azure-disk | Configuring for Azure | |

Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.

Defining a StorageClass

StorageClass objects are currently globally scoped and must be created by cluster-admin or storage-admin users.

For GCE and AWS, a default StorageClass is created during OKD installation. You can change the default StorageClass or delete it.

The following sections describe the basic object definition for a StorageClass and a specific example for each of the supported plug-in types.

Basic StorageClass object definition

StorageClass Basic object definition

```yaml
kind: StorageClass (1)
apiVersion: storage.k8s.io/v1 (2)
metadata:
  name: foo (3)
  annotations: (4)
    ...
provisioner: kubernetes.io/plug-in-type (5)
parameters: (6)
  param1: value
  ...
  paramN: value
```

(1) (required) The API object type.
(2) (required) The current apiVersion.
(3) (required) The name of the StorageClass.
(4) (optional) Annotations for the StorageClass.
(5) (required) The type of provisioner associated with this storage class.
(6) (optional) The parameters required for the specific provisioner; these vary from plug-in to plug-in.

StorageClass annotations

To set a StorageClass as the cluster-wide default, add the following annotation to its metadata:

```yaml
storageclass.kubernetes.io/is-default-class: "true"
```

This enables any Persistent Volume Claim (PVC) that does not specify a specific StorageClass to be provisioned automatically through the default StorageClass.

The beta annotation storageclass.beta.kubernetes.io/is-default-class still works; however, it will be removed in a future release.

To set a StorageClass description, add the following annotation:

```yaml
kubernetes.io/description: My StorageClass Description
```
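
For illustration, a StorageClass carrying both annotations might look like the following sketch; the name, description, provisioner, and parameters are placeholders:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard                    # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    kubernetes.io/description: Default StorageClass for general-purpose workloads  # illustrative description
provisioner: kubernetes.io/aws-ebs  # any supported provisioner
parameters:
  type: gp2
```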

OpenStack Cinder object definition

cinder-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast (1)
  availability: nova (2)
  fsType: ext4 (3)
```

(1) Volume type created in Cinder. Default is empty.
(2) Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OKD cluster has a node.
(3) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.

AWS ElasticBlockStore (EBS) object definition

aws-ebs-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1 (1)
  zone: us-east-1d (2)
  iopsPerGB: "10" (3)
  encrypted: "true" (4)
  kmsKeyId: keyvalue (5)
  fsType: ext4 (6)
```

(1) Select from io1, gp2, sc1, st1. The default is gp2. See AWS documentation for valid Amazon Resource Name (ARN) values.
(2) AWS zone. If no zone is specified, volumes are generally round-robined across all active zones where the OKD cluster has a node. The zone and zones parameters must not be used at the same time.
(3) Only for io1 volumes. I/O operations per second per GiB. The AWS volume plug-in multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See AWS documentation for further details.
(4) Denotes whether to encrypt the EBS volume. Valid values are true or false.
(5) Optional. The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true, then AWS generates a key. See AWS documentation for a valid ARN value.
(6) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.

GCE PersistentDisk (gcePD) object definition

gce-pd-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard (1)
  zone: us-central1-a (2)
  zones: us-central1-a, us-central1-b, us-east1-b (3)
  fsType: ext4 (4)
```

(1) Select either pd-standard or pd-ssd. The default is pd-ssd.
(2) GCE zone. If no zone is specified, volumes are generally round-robined across all active zones where the OKD cluster has a node. The zone and zones parameters must not be used at the same time.
(3) A comma-separated list of GCE zone(s). If no zone is specified, volumes are generally round-robined across all active zones where the OKD cluster has a node. The zone and zones parameters must not be used at the same time.
(4) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.

GlusterFS object definition

glusterfs-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/glusterfs
parameters: (1)
  resturl: http://127.0.0.1:8081 (2)
  restuser: admin (3)
  secretName: heketi-secret (4)
  secretNamespace: default (5)
  gidMin: "40000" (6)
  gidMax: "50000" (7)
  volumeoptions: group metadata-cache, nl-cache on (8)
  volumetype: replicate:3 (9)
  volumenameprefix: custom (10)
```

(1) Mandatory and a few optional parameters are listed. Refer to Registering a Storage Class for additional parameters.
(2) heketi (volume management REST service for Gluster) URL that provisions GlusterFS volumes on demand. The general format should be {http/https}://{IPaddress}:{Port}. This is a mandatory parameter for the GlusterFS dynamic provisioner. If the heketi service is exposed as a routable service in OKD, it will have a resolvable fully qualified domain name (FQDN) and heketi service URL.
(3) heketi user who has access to create volumes. Usually "admin".
(4) Identification of a Secret that contains a user password to use when talking to heketi. Optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must be of type "kubernetes.io/glusterfs".
(5) The namespace of the mentioned secretName. Optional; an empty password will be used when both secretNamespace and secretName are omitted. The provided secret must be of type "kubernetes.io/glusterfs".
(6) Optional. The minimum value of the GID range for volumes of this StorageClass.
(7) Optional. The maximum value of the GID range for volumes of this StorageClass.
(8) Optional. Options for newly created volumes, which allow for performance tuning. See Tuning Volume Options for more GlusterFS volume options.
(9) Optional. The type of volume to use.
(10) Optional. Enables custom volume name support in the format <volumenameprefix>_<namespace>_<claimname>_UUID. If you create a new PVC called myclaim in your project project1 using this StorageClass, the volume name will be custom-project1-myclaim-UUID.

When the gidMin and gidMax values are not specified, their defaults are 2000 and 2147483647, respectively. Each dynamically provisioned volume is given a GID in this range (gidMin-gidMax). The GID is released back to the pool when the respective volume is deleted. The GID pool is per StorageClass. If two or more storage classes have GID ranges that overlap, the provisioner may dispatch duplicate GIDs.

When heketi authentication is used, a Secret containing the admin key should also exist:

heketi-secret.yaml

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  key: bXlwYXNzd29yZA== (1)
type: kubernetes.io/glusterfs
```

(1) base64-encoded password, for example: echo -n "mypassword" | base64

When the PVs are dynamically provisioned, the GlusterFS plug-in automatically creates an Endpoints and a headless Service named gluster-dynamic-<claimname>. When the PVC is deleted, these dynamic resources are deleted automatically.
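
For example, after a PVC named myclaim is provisioned, you can list the generated resources with a command such as the following (the claim name is illustrative):

```
$ oc get endpoints,service | grep gluster-dynamic-myclaim
```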

Ceph RBD object definition

ceph-storageclass.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789 (1)
  adminId: admin (2)
  adminSecretName: ceph-secret (3)
  adminSecretNamespace: kube-system (4)
  pool: kube (5)
  userId: kube (6)
  userSecretName: ceph-secret-user (7)
  fsType: ext4 (8)
```

(1) Ceph monitors, comma-delimited. It is required.
(2) Ceph client ID that is capable of creating images in the pool. Default is "admin".
(3) Secret name for adminId. It is required. The provided secret must have type "kubernetes.io/rbd".
(4) The namespace for adminSecret. Default is "default".
(5) Ceph RBD pool. Default is "rbd".
(6) Ceph client ID that is used to map the Ceph RBD image. Default is the same as adminId.
(7) The name of the Ceph Secret for userId to map the Ceph RBD image. It must exist in the same namespace as PVCs. It is required.
(8) File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4.

Trident object definition

trident.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold
provisioner: netapp.io/trident (1)
parameters: (2)
  media: "ssd"
  provisioningType: "thin"
  snapshots: "true"
```

(1) For more information about installing Trident with OKD, see the Trident documentation.
(2) For more information about supported parameters, see the storage attributes section of the Trident documentation.

Trident uses the parameters as selection criteria for the different pools of storage that are registered with it. Trident itself is configured separately.

VMware vSphere object definition

vsphere-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume (1)
parameters:
  diskformat: thin (2)
```

(1) For more information about using VMware vSphere with OKD, see the VMware vSphere documentation.
(2) diskformat: thin, zeroedthick, or eagerzeroedthick. See vSphere docs for details. Default: thin.

Azure File object definition

To configure Azure file dynamic provisioning:

  1. Create the role in the user's project:

     ```
     $ cat azf-role.yaml
     apiVersion: rbac.authorization.k8s.io/v1
     kind: Role
     metadata:
       name: system:controller:persistent-volume-binder
       namespace: <user's project name>
     rules:
     - apiGroups: [""]
       resources: ["secrets"]
       verbs: ["create", "get", "delete"]
     ```

  2. Create the role binding to the persistent-volume-binder service account in the kube-system project:

     ```
     $ cat azf-rolebind.yaml
     apiVersion: rbac.authorization.k8s.io/v1
     kind: RoleBinding
     metadata:
       name: system:controller:persistent-volume-binder
       namespace: <user's project>
     roleRef:
       apiGroup: rbac.authorization.k8s.io
       kind: Role
       name: system:controller:persistent-volume-binder
     subjects:
     - kind: ServiceAccount
       name: persistent-volume-binder
       namespace: kube-system
     ```

  3. Add the service account as admin to the user's project:

     ```
     $ oc policy add-role-to-user admin system:serviceaccount:kube-system:persistent-volume-binder -n <user's project>
     ```

  4. Create a storage class for the Azure file:

     ```
     $ cat azfsc.yaml | oc create -f -
     kind: StorageClass
     apiVersion: storage.k8s.io/v1
     metadata:
       name: azfsc
     provisioner: kubernetes.io/azure-file
     mountOptions:
     - dir_mode=0777
     - file_mode=0777
     ```

The user can now create a PVC that uses this storage class.
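
For example, a minimal PVC sketch that requests storage through this class might look like the following; the claim name and size are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azure-file-claim      # illustrative claim name
spec:
  accessModes:
    - ReadWriteMany           # Azure File supports shared access
  storageClassName: azfsc
  resources:
    requests:
      storage: 5Gi            # illustrative size
```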

Azure Disk object definition

azure-advanced-disk-storageclass.yaml

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/azure-disk
parameters:
  storageAccount: azure_storage_account_name (1)
  storageaccounttype: Standard_LRS (2)
  kind: Dedicated (3)
```

(1) Azure storage account name. This must reside in the same resource group as the cluster. If a storage account is specified, the location is ignored. If a storage account is not specified, a new storage account is created in the same resource group as the cluster. If you specify a storageAccount, the value for kind must be Dedicated.
(2) Azure storage account SKU tier. Default is empty. Note: a Premium VM can attach both Standard_LRS and Premium_LRS disks, a Standard VM can only attach Standard_LRS disks, a Managed VM can only attach managed disks, and an unmanaged VM can only attach unmanaged disks.
(3) Possible values are Shared (default), Dedicated, and Managed.

  • If kind is set to Shared, Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster.

  • If kind is set to Managed, Azure creates new managed disks.

  • If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work:

    • The specified storage account must be in the same region.

    • The Azure Cloud Provider must have write access to the storage account.

  • If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster.
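
For example, on a cluster backed by managed disks, a minimal StorageClass sketch might set kind: Managed and let Azure choose the storage account; the name and SKU below are illustrative:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium              # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS    # illustrative SKU; requires VM sizes that support premium storage
  kind: Managed                      # Azure creates new managed disks
```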

The Azure StorageClass was revised in OKD version 3.7. If you upgraded from a previous version, either:

  • Specify the property kind: dedicated to continue using the Azure StorageClass created before the upgrade, or

  • Add the location parameter (for example, "location": "southcentralus",) in the azure.conf file to use the default property kind: shared. Doing this creates new storage accounts for future use.
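
The exact layout of azure.conf depends on how the Azure cloud provider configuration was generated; the following is only a sketch of a JSON-style fragment, matching the quoting in the example above, with all values as placeholders:

```
{
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "aadClientId": "<client-id>",
  "aadClientSecret": "<client-secret>",
  "resourceGroup": "<cluster-resource-group>",
  "location": "southcentralus"
}
```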

Changing the default StorageClass

If you are using GCE and AWS, use the following process to change the default StorageClass:

  1. List the StorageClass:

     ```
     $ oc get storageclass
     NAME                 TYPE
     gp2 (default)        kubernetes.io/aws-ebs (1)
     standard             kubernetes.io/gce-pd
     ```

     (1) (default) denotes the default StorageClass.

  2. Change the value of the annotation storageclass.kubernetes.io/is-default-class to false for the default StorageClass:

     ```
     $ oc patch storageclass gp2 -p '{"metadata": {"annotations": \
         {"storageclass.kubernetes.io/is-default-class": "false"}}}'
     ```

  3. Make another StorageClass the default by adding or modifying the annotation as storageclass.kubernetes.io/is-default-class=true:

     ```
     $ oc patch storageclass standard -p '{"metadata": {"annotations": \
         {"storageclass.kubernetes.io/is-default-class": "true"}}}'
     ```

     If more than one StorageClass is marked as default, a PVC can be created only if the storageClassName is explicitly specified. Therefore, only one StorageClass should be set as the default.

  4. Verify the changes:

     ```
     $ oc get storageclass
     NAME                 TYPE
     gp2                  kubernetes.io/aws-ebs
     standard (default)   kubernetes.io/gce-pd
     ```

Additional information and examples