GCP PD CSI Driver Operator

Overview

OKD can provision persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Google Cloud Platform (GCP) persistent disk (PD) storage.

GCP PD CSI Driver Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a Container Storage Interface (CSI) Operator and driver.

To create CSI-provisioned persistent volumes (PVs) that mount to GCP PD storage assets, OKD installs the GCP PD CSI Driver Operator and the GCP PD CSI driver by default in the openshift-cluster-csi-drivers namespace.

  • GCP PD CSI Driver Operator: By default, the Operator provides a storage class that you can use to create PVCs. You also have the option to create the GCP PD storage class as described in Persistent storage using GCE Persistent Disk.

  • GCP PD driver: The driver enables you to create and mount GCP PD PVs.

OKD defaults to using an in-tree, or non-CSI, driver to provision GCP PD storage. This in-tree driver will be removed in a subsequent update of OKD. Volumes provisioned using the existing in-tree driver are planned for migration to the CSI driver at that time.
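
To confirm this default installation, you can check the driver pods and list the storage classes with commands such as the following. The output, and the exact storage class names, depend on your cluster:

    $ oc get pods -n openshift-cluster-csi-drivers

    $ oc get storageclass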

About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plug-ins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins.

GCP PD CSI driver storage class parameters

The Google Cloud Platform (GCP) persistent disk (PD) Container Storage Interface (CSI) driver uses the CSI external-provisioner sidecar as a controller. This is a separate helper container that is deployed with the CSI driver. The sidecar manages persistent volumes (PVs) by triggering the CreateVolume operation.

The GCP PD CSI driver uses the csi.storage.k8s.io/fstype parameter key to support dynamic provisioning. The following table describes all the GCP PD CSI storage class parameters that are supported by OKD.

Table 1. CreateVolume Parameters

Parameter                 Values                   Default        Description
type                      pd-ssd or pd-standard    pd-standard    Allows you to choose between standard PVs and solid-state drive PVs.
replication-type          none or regional-pd      none           Allows you to choose between zonal and regional PVs.
disk-encryption-kms-key   Fully qualified          Empty string   Uses customer-managed encryption keys (CMEK) to encrypt new disks.
                          resource identifier
                          for the key to use to
                          encrypt new disks.
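
For example, a storage class that uses these parameters to provision regional SSD-backed volumes might look like the following sketch. The storage class name and the csi.storage.k8s.io/fstype value are illustrative choices, not defaults that OKD creates for you:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: csi-gce-pd-regional-ssd
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-ssd
      replication-type: regional-pd
      csi.storage.k8s.io/fstype: ext4
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true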

Creating a custom-encrypted persistent volume

When you create a PersistentVolumeClaim object, OKD provisions a new persistent volume (PV) and creates a PersistentVolume object. You can add a custom encryption key in Google Cloud Platform (GCP) to protect a PV in your cluster by encrypting the newly created PV.

The newly created PV is encrypted with customer-managed encryption keys (CMEK) on the cluster by using a new or existing Google Cloud Key Management Service (KMS) key.

Prerequisites

  • You are logged in to a running OKD cluster.

  • You have created a Cloud KMS key ring and key version.

For more information about CMEK and Cloud KMS resources, see Using customer-managed encryption keys (CMEK).
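
If you have not created these Cloud KMS resources yet, one possible way is the gcloud CLI, for example as follows. The key ring name, key name, and location are placeholders that you choose:

    $ gcloud kms keyrings create example-ring --location us-central1

    $ gcloud kms keys create example-key --location us-central1 --keyring example-ring --purpose encryption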

Procedure

To create a custom-encrypted PV, complete the following steps:

  1. Create a storage class with the Cloud KMS key. The following example enables dynamic provisioning of encrypted volumes:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: csi-gce-pd-cmek
      provisioner: pd.csi.storage.gke.io
      volumeBindingMode: "WaitForFirstConsumer"
      allowVolumeExpansion: true
      parameters:
        type: pd-standard
        disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> (1)

    (1) This field must be the resource identifier for the key that will be used to encrypt new disks. Values are case-sensitive. For more information about providing key ID values, see Retrieving a resource’s ID and Getting a Cloud KMS resource ID.

    You cannot add the disk-encryption-kms-key parameter to an existing storage class. However, you can delete the storage class and recreate it with the same name and a different set of parameters. If you do this, the provisioner of the existing class must be pd.csi.storage.gke.io.
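
    To put the storage class into effect, save the manifest to a file and apply it with the oc CLI. The file name in this example is arbitrary:

      $ oc apply -f csi-gce-pd-cmek.yaml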

  2. Verify that the storage class was deployed on your OKD cluster by using the oc command:

      $ oc describe storageclass csi-gce-pd-cmek

    Example output

      Name:                  csi-gce-pd-cmek
      IsDefaultClass:        No
      Annotations:           None
      Provisioner:           pd.csi.storage.gke.io
      Parameters:            disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard
      AllowVolumeExpansion:  true
      MountOptions:          none
      ReclaimPolicy:         Delete
      VolumeBindingMode:     WaitForFirstConsumer
      Events:                none
  3. Create a file named pvc.yaml with a storageClassName that matches the storage class object that you created in the previous step:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: podpvc
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: csi-gce-pd-cmek
        resources:
          requests:
            storage: 6Gi

    If you marked the new storage class as default, you can omit the storageClassName field.
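
    If you want to mark the new storage class as the default, one common way is the standard Kubernetes annotation, as in the following example command; this is optional and not required for this procedure:

      $ oc patch storageclass csi-gce-pd-cmek -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'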

  4. Apply the PVC to your cluster:

      $ oc apply -f pvc.yaml
  5. Get the status of your PVC and verify that it is created and bound to a newly provisioned PV:

      $ oc get pvc

    Example output

      NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
      podpvc   Bound    pvc-e36abf50-84f3-11e8-8538-42010a800002   10Gi       RWO            csi-gce-pd-cmek   9s

    If your storage class has the volumeBindingMode field set to WaitForFirstConsumer, you must create a pod to use the PVC before you can verify it.
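
    For example, a minimal pod such as the following consumes the claim and triggers provisioning. The pod name and container image are placeholders; use any image that is available in your environment:

      apiVersion: v1
      kind: Pod
      metadata:
        name: pvc-consumer
      spec:
        containers:
        - name: app
          image: registry.access.redhat.com/ubi8/ubi
          command: ["sleep", "3600"]
          volumeMounts:
          - mountPath: /data
            name: pd-volume
        volumes:
        - name: pd-volume
          persistentVolumeClaim:
            claimName: podpvc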

Your CMEK-protected PV is now ready to use with your OKD cluster.

Additional resources