Configuring for VMware vSphere

You can configure OKD to use VMware vSphere VMDKs to back PersistentVolumes. This configuration can include using VMware vSphere VMDKs as persistent storage for application data.

The vSphere Cloud Provider allows using vSphere-managed storage in OKD and supports every storage primitive that Kubernetes uses:

  • PersistentVolume (PV)

  • PersistentVolumeClaim (PVC)

  • StorageClass

PersistentVolumes requested by stateful containerized applications can be provisioned on VMware vSAN, VVOL, VMFS, or NFS datastores.

Kubernetes PVs are defined in Pod specifications. They can reference VMDK files directly when you use static provisioning, or PVCs when you use dynamic provisioning, which is preferred.
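
For example, the following is a minimal sketch of a Pod that mounts a vSphere-backed volume through a PVC; the claim name app-data, the pod name, and the image are hypothetical and assume the claim binds to a VMDK-backed PV:

  apiVersion: v1
  kind: Pod
  metadata:
    name: app
  spec:
    containers:
    - name: app
      image: registry.access.redhat.com/rhel7   # example image
      volumeMounts:
      - name: data
        mountPath: /var/lib/app
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data                     # hypothetical claim backed by a vSphere VMDK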

The latest updates to the vSphere Cloud Provider are in vSphere Storage for Kubernetes.

Before you begin

Requirements

VMware vSphere

Standalone ESXi is not supported.

  • vSphere version 6.0.x at minimum; version 6.7 U1b is required if you intend to support a complete VMware Validated Design.

  • vSAN, VMFS, and NFS are supported.

    • vSAN support is limited to one cluster in one vCenter.

Prerequisites

You must install VMware Tools on each node VM. See Installing VMware Tools for more information.

You can use the open source govc CLI tool (part of the VMware govmomi project) for additional configuration and troubleshooting. For example, see the following govc CLI configuration; a connectivity check is sketched after the exports:

  export GOVC_URL='vCenter IP OR FQDN'
  export GOVC_USERNAME='vCenter User'
  export GOVC_PASSWORD='vCenter Password'
  export GOVC_INSECURE=1
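
With these variables exported, you can confirm that govc can reach vCenter before continuing. This is a minimal sketch; the about subcommand prints product, version, and build information for the vCenter server on success:

  # Verify connectivity and credentials against vCenter (sketch).
  $ govc about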

Permissions

Create and assign roles to the vSphere Cloud Provider. A vCenter user with the required set of privileges is required.

In general, the vSphere user designated for the vSphere Cloud Provider must have the following permissions:

  • Read permission on the parent entities of the node VMs such as folder, host, datacenter, datastore folder, datastore cluster, and so on.

  • VirtualMachine.Inventory.Create/Delete permission on the vsphere.conf defined resource pool - this is used to create and delete test VMs.

See the vSphere Documentation Center for steps to create a custom role, user, and role assignment.

vSphere Cloud Provider supports OKD clusters that span multiple vCenters. Make sure that all of the above privileges are correctly set for all vCenters.

Dynamic provisioning permissions

Dynamic persistent volume creation is the recommended practice. The required roles, privileges, and entities are listed in the following table; a govc sketch for creating and assigning these roles follows the table.

| Roles | Privileges | Entities | Propagate to children |
| --- | --- | --- | --- |
| manage-k8s-node-vms | Resource.AssignVMToPool, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete, VirtualMachine.Config.Settings | Cluster, Hosts, VM Folder | Yes |
| manage-k8s-volumes | Datastore.AllocateSpace, Datastore.FileManagement (Low level file operations) | Datastore | No |
| k8s-system-read-and-spbm-profile-view | StorageProfile.View (Profile-driven storage view) | vCenter | No |
| Read-only (pre-existing default role) | System.Anonymous, System.Read, System.View | Datacenter, Datastore Cluster, Datastore Storage Folder | No |
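
As a sketch only (the commands, principal, and inventory path are examples, not the only way to do this), you can create the custom roles from the table with the govc CLI and then assign them to the cloud provider user with govc permissions.set:

  # Create the custom roles used by the vSphere Cloud Provider (sketch).
  govc role.create manage-k8s-node-vms \
    Resource.AssignVMToPool \
    VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk \
    VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.RemoveDisk \
    VirtualMachine.Inventory.Create VirtualMachine.Inventory.Delete \
    VirtualMachine.Config.Settings
  govc role.create manage-k8s-volumes Datastore.AllocateSpace Datastore.FileManagement
  govc role.create k8s-system-read-and-spbm-profile-view StorageProfile.View

  # Assign a role to the cloud provider user on an entity (example principal and path).
  govc permissions.set -principal k8s-vcp@vsphere.local \
    -role manage-k8s-volumes /mydatacenter/datastore/shared-datastore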

Static provisioning permissions

Datastore.FileManagement is required only for the manage-k8s-volumes role, and only if you create PVCs to bind with statically provisioned PVs and set the reclaim policy to delete. When the PVC is deleted, the associated statically provisioned PVs are also deleted.

| Roles | Privileges | Entities | Propagate to children |
| --- | --- | --- | --- |
| manage-k8s-node-vms | VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk | VM Folder | Yes |
| manage-k8s-volumes | Datastore.FileManagement (Low level file operations) | Datastore | No |
| Read-only (pre-existing default role) | System.Anonymous, System.Read, System.View | vCenter, Datacenter, Datastore Cluster, Datastore Storage Folder, Cluster, Hosts | No |

Procedure

  1. Create a VM folder and move OKD Node VMs to this folder.

  2. Set the disk.EnableUUID parameter to true for each Node VM. This setting ensures that the VMware vSphere Virtual Machine Disk (VMDK) always presents a consistent UUID to the VM, allowing the disk to be mounted properly.

    Every node VM that participates in the cluster must have the disk.EnableUUID parameter set to true. To set this value, follow the steps for either the vSphere console or the govc CLI tool:

    1. From the vSphere HTML Client, navigate to VM Properties → VM Options → Advanced → Configuration Parameters and set disk.enableUUID=TRUE.

    2. Or, using the govc CLI, find the node VM paths:

         $ govc ls /datacenter/vm/<vm-folder-name>

       Then set disk.EnableUUID to true for all VMs:

         $ govc vm.change -e="disk.enableUUID=1" -vm='VM Path'

If OKD node VMs are created from a virtual machine template, then you can set disk.EnableUUID=1 on the template VM. VMs cloned from this template inherit this property.
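
To confirm the setting on a VM, one option is to inspect its ExtraConfig with govc; this is a sketch, and the exact output format may differ:

  # Show the VM's ExtraConfig and filter for the UUID setting (sketch).
  $ govc vm.info -e 'VM Path' | grep -i disk.enableUUID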

Configuring OKD for vSphere

You can configure OKD for vSphere in two ways:

Option 1: Configuring OKD for vSphere using Ansible

You can configure OKD for the VMware vSphere Cloud Provider (VCP) by modifying the Ansible inventory file. These changes can be made before installation or to an existing cluster.

Procedure

  1. Add the following to the Ansible inventory file:

       [OSEv3:vars]
       openshift_cloudprovider_kind=vsphere
       openshift_cloudprovider_vsphere_username=administrator@vsphere.local (1)
       openshift_cloudprovider_vsphere_password=<password>
       openshift_cloudprovider_vsphere_host=10.x.y.32 (2)
       openshift_cloudprovider_vsphere_datacenter=<Datacenter> (3)
       openshift_cloudprovider_vsphere_datastore=<Datastore> (4)

     (1) The user name with the appropriate permissions to create and attach disks in vSphere.
     (2) The vCenter server address.
     (3) The vCenter Datacenter name where the OKD VMs are located.
     (4) The datastore used for creating VMDKs.
  2. Run the deploy_cluster.yml playbook.

       $ ansible-playbook -i <inventory_file> \
         playbooks/deploy_cluster.yml

Installing with Ansible also creates and configures the following files to fit your vSphere environment:

  • /etc/origin/cloudprovider/vsphere.conf

  • /etc/origin/master/master-config.yaml

  • /etc/origin/node/node-config.yaml

As a reference, a full inventory is shown as follows:

The openshift_cloudprovider_vsphere_ values are required for OKD to be able to create vSphere resources such as VMDKs on datastores for persistent volumes.

  $ cat /etc/ansible/hosts

  [OSEv3:children]
  ansible
  masters
  infras
  apps
  etcd
  nodes
  lb

  [OSEv3:vars]
  become=yes
  ansible_become=yes
  ansible_user=root

  oreg_auth_user=service_account (1)
  oreg_auth_password=service_account_token (1)

  openshift_deployment_type=openshift-enterprise

  # Required per https://access.redhat.com/solutions/3480921
  oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version}
  openshift_examples_modify_imagestreams=true

  # vSphere Cloud provider
  openshift_cloudprovider_kind=vsphere
  openshift_cloudprovider_vsphere_username="administrator@vsphere.local"
  openshift_cloudprovider_vsphere_password="password"
  openshift_cloudprovider_vsphere_host="vcsa65-dc1.example.com"
  openshift_cloudprovider_vsphere_datacenter=Datacenter
  openshift_cloudprovider_vsphere_cluster=Cluster
  openshift_cloudprovider_vsphere_resource_pool=ResourcePool
  openshift_cloudprovider_vsphere_datastore="datastore"
  openshift_cloudprovider_vsphere_folder="folder"

  # Service catalog
  openshift_hosted_etcd_storage_kind=dynamic
  openshift_hosted_etcd_storage_volume_name=etcd-vol
  openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
  openshift_hosted_etcd_storage_volume_size=1G
  openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

  openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt
  openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}]

  # Setup vsphere registry storage
  openshift_hosted_registry_storage_kind=vsphere
  openshift_hosted_registry_storage_access_modes=['ReadWriteOnce']
  openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume']
  openshift_hosted_registry_replicas=1

  openshift_hosted_router_replicas=3
  openshift_master_cluster_method=native
  openshift_node_local_quota_per_fsgroup=512Mi
  default_subdomain=example.com
  openshift_master_cluster_hostname=openshift.example.com
  openshift_master_cluster_public_hostname=openshift.example.com
  openshift_master_default_subdomain=apps.example.com
  os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
  osm_use_cockpit=true

  # Red Hat subscription name and password
  rhsub_user=username
  rhsub_pass=password
  rhsub_pool=8a85f9815e9b371b015e9b501d081d4b

  # metrics
  openshift_metrics_install_metrics=true
  openshift_metrics_storage_kind=dynamic
  openshift_metrics_storage_volume_size=25Gi

  # logging
  openshift_logging_install_logging=true
  openshift_logging_es_pvc_dynamic=true
  openshift_logging_es_pvc_size=30Gi
  openshift_logging_elasticsearch_storage_type=pvc
  openshift_logging_es_cluster_size=1
  openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_fluentd_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_storage_kind=dynamic

  #registry
  openshift_public_hostname=openshift.example.com

  [ansible]
  localhost

  [masters]
  master-0.example.com vm_name=master-0 ipv4addr=10.x.y.103
  master-1.example.com vm_name=master-1 ipv4addr=10.x.y.104
  master-2.example.com vm_name=master-2 ipv4addr=10.x.y.105

  [infras]
  infra-0.example.com vm_name=infra-0 ipv4addr=10.x.y.100
  infra-1.example.com vm_name=infra-1 ipv4addr=10.x.y.101
  infra-2.example.com vm_name=infra-2 ipv4addr=10.x.y.102

  [apps]
  app-0.example.com vm_name=app-0 ipv4addr=10.x.y.106
  app-1.example.com vm_name=app-1 ipv4addr=10.x.y.107
  app-2.example.com vm_name=app-2 ipv4addr=10.x.y.108

  [etcd]
  master-0.example.com
  master-1.example.com
  master-2.example.com

  [lb]
  haproxy-0.example.com vm_name=haproxy-0 ipv4addr=10.x.y.200

  [nodes]
  master-0.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
  master-1.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
  master-2.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true
  infra-0.example.com openshift_node_group_name="node-config-infra"
  infra-1.example.com openshift_node_group_name="node-config-infra"
  infra-2.example.com openshift_node_group_name="node-config-infra"
  app-0.example.com openshift_node_group_name="node-config-compute"
  app-1.example.com openshift_node_group_name="node-config-compute"
  app-2.example.com openshift_node_group_name="node-config-compute"
(1) If you use a container registry that requires authentication, such as the default container image registry, specify the credentials for that account. See Accessing and Configuring the Red Hat Registry.

Deploying a vSphere VM environment is not officially supported by Red Hat, but it can be configured.

Option 2: Manually configuring OKD for vSphere

Manually configuring master hosts for vSphere

Perform the following on all master hosts.

Procedure

  1. Edit the master configuration file, located at /etc/origin/master/master-config.yaml by default, on all masters and update the contents of the apiServerArguments and controllerArguments sections:

       kubernetesMasterConfig:
         ...
         apiServerArguments:
           cloud-provider:
             - "vsphere"
           cloud-config:
             - "/etc/origin/cloudprovider/vsphere.conf"
         controllerArguments:
           cloud-provider:
             - "vsphere"
           cloud-config:
             - "/etc/origin/cloudprovider/vsphere.conf"

    When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.

  2. When you configure OKD for vSphere using Ansible, the /etc/origin/cloudprovider/vsphere.conf file is created automatically. Because you are manually configuring OKD for vSphere, you must create the file. Before you create the file, decide if you want multiple vCenter zones or not.

    The cluster installation process configures a single zone or a single vCenter by default. However, deploying OKD in vSphere across multiple zones can help you avoid single points of failure, but it creates the need for shared storage across zones so that, if an OKD node host goes down in zone “A”, the pods can be moved to zone “B”. See Multiple zone limitations in the Kubernetes documentation for more information.

    • To configure a single vCenter server, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:

         [Global] (1)
         user = "myusername" (2)
         password = "mypassword" (3)
         port = "443" (4)
         insecure-flag = "1" (5)
         datacenters = "mydatacenter" (6)

         [VirtualCenter "10.10.0.2"] (7)
         user = "myvCenterusername"
         password = "password"

         [Workspace] (8)
         server = "10.10.0.2" (9)
         datacenter = "mydatacenter"
         folder = "path/to/vms" (10)
         default-datastore = "shared-datastore" (11)
         resourcepool-path = "myresourcepoolpath" (12)

         [Disk]
         scsicontrollertype = pvscsi (13)

         [Network]
         public-network = "VM Network" (14)
      (1) Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
      (2) vCenter username for the vSphere cloud provider.
      (3) vCenter password for the specified user.
      (4) Optional. Port number for the vCenter server. Defaults to port 443.
      (5) Set to 1 if the vCenter uses a self-signed certificate.
      (6) Name of the data center on which Node VMs are deployed.
      (7) Override specific [Global] properties for this Virtual Center. Possible settings can be [Port], [user], [insecure-flag], [datacenters]. Any settings not specified are pulled from the [Global] section.
      (8) Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others.
      (9) IP Address or FQDN for the vCenter server.
      (10) Path to the VM directory for node VMs.
      (11) Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OKD 3.9, if the datastore was located in a storage directory or was a member of a datastore cluster, the full path was required.
      (12) Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
      (13) Type of SCSI controller the VMDK will be attached to the VM as.
      (14) Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes.
    • To configure multiple vCenter servers, use the following format for the /etc/origin/cloudprovider/vsphere.conf file:

         [Global] (1)
         user = "myusername" (2)
         password = "mypassword" (3)
         port = "443" (4)
         insecure-flag = "1" (5)
         datacenters = "us-east, us-west" (6)

         [VirtualCenter "10.10.0.2"] (7)
         user = "myvCenterusername"
         password = "password"

         [VirtualCenter "10.10.0.3"]
         port = "448"
         insecure-flag = "0"

         [Workspace] (8)
         server = "10.10.0.2" (9)
         datacenter = "mydatacenter"
         folder = "path/to/vms" (10)
         default-datastore = "shared-datastore" (11)
         resourcepool-path = "myresourcepoolpath" (12)

         [Disk]
         scsicontrollertype = pvscsi (13)

         [Network]
         public-network = "VM Network" (14)
      (1) Any properties set in the [Global] section are used for all specified vCenters unless overridden by the settings in the individual [VirtualCenter] sections.
      (2) vCenter username for the vSphere cloud provider.
      (3) vCenter password for the specified user.
      (4) Optional. Port number for the vCenter server. Defaults to port 443.
      (5) Set to 1 if the vCenter uses a self-signed certificate.
      (6) Name of the data centers on which Node VMs are deployed.
      (7) Override specific [Global] properties for this Virtual Center. Possible settings can be [Port], [user], [insecure-flag], [datacenters]. Any settings not specified are pulled from the [Global] section.
      (8) Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others.
      (9) IP Address or FQDN for the vCenter server with which the Cloud Provider communicates.
      (10) Path to the VM directory for node VMs.
      (11) Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OKD 3.9, if the datastore was located in a storage directory or was a member of a datastore cluster, the full path was required.
      (12) Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created.
      (13) Type of SCSI controller the VMDK will be attached to the VM as.
      (14) Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes.
  3. Restart the OKD host services:

     # master-restart api
     # master-restart controllers
     # systemctl restart atomic-openshift-node

Manually configuring node hosts for vSphere

Perform the following on all node hosts.

Procedure

To configure the OKD nodes for vSphere:

  1. Edit the appropriate node configuration map and update the contents of the kubeletArguments section:

       kubeletArguments:
         cloud-provider:
           - "vsphere"
         cloud-config:
           - "/etc/origin/cloudprovider/vsphere.conf"

    The nodeName must match the VM name in vSphere for the cloud provider integration to work properly. The name must also be RFC1123 compliant. A quick check is sketched after this procedure.

  2. Restart the OKD services on all nodes.

     # systemctl restart atomic-openshift-node
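
As a quick check of the nodeName requirement above, you can compare the node names registered in OKD with the VM names in the vSphere folder; the folder path is an example:

  # Node names as registered in OKD.
  $ oc get nodes -o name

  # VM names in the vSphere folder that holds the OKD nodes (example path).
  $ govc ls /datacenter/vm/<vm-folder-name>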

Applying Configuration Changes

Start or restart OKD services on all master and node hosts to apply your configuration changes; see Restarting OKD services:

  # master-restart api
  # master-restart controllers
  # systemctl restart atomic-openshift-node

Kubernetes architecture expects reliable endpoints from cloud providers. When a cloud provider is down, the kubelet prevents OKD from restarting. If the underlying cloud provider endpoints are not reliable, do not install a cluster that uses the cloud provider integration. Install the cluster as if it is a bare metal environment. It is not recommended to toggle cloud provider integration on or off in an installed cluster. However, if that scenario is unavoidable, then complete the following process.

Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue:

  1. Log in to the CLI as a cluster administrator.

  2. Check and back up existing node labels:

     $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)'
  3. Delete the nodes:

     $ oc delete node <node_name>
  4. On each node host, restart the OKD service.

     # systemctl restart origin-node
  5. Add back any labels on each node that you previously had, for example with oc label as sketched below.
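
The following is a minimal sketch of re-applying a recorded label; the label key and value are examples taken from the backup you made in step 2:

  $ oc label node <node_name> node-role.kubernetes.io/infra=true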

Configuring OKD to use vSphere storage

OKD supports VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OKD cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed.

OKD creates the disk in vSphere and attaches the disk to the correct instance.

The OKD persistent volume (PV) framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. vSphere VMDK volumes can be provisioned dynamically.

PVs are not bound to a single project or namespace; they can be shared across the OKD cluster. PV claims, however, are specific to a project or namespace and can be requested by users.

High availability of storage in the infrastructure is left to the underlying storage provider.

Prerequisites

Before creating PVs using vSphere, ensure your OKD cluster meets the following requirements:

  • OKD must first be configured for vSphere.

  • The name of each node host in the infrastructure must match the vSphere VM name.

  • Each node host must be in the same resource group.

Dynamically Provisioning VMware vSphere volumes

Dynamically provisioning VMware vSphere volumes is the preferred provisioning method.

  1. If you did not specify the openshift_cloudprovider_kind=vsphere and openshift_cloudprovider_vsphere_* variables in the Ansible inventory file when you provisioned the cluster, you must manually create the following StorageClass to use the vsphere-volume provisioner:

       $ oc get --export storageclass vsphere-standard -o yaml

       kind: StorageClass
       apiVersion: storage.k8s.io/v1
       metadata:
         name: "vsphere-standard" (1)
       provisioner: kubernetes.io/vsphere-volume (2)
       parameters:
         diskformat: thin (3)
         datastore: "YourvSphereDatastoreName" (4)
       reclaimPolicy: Delete

     (1) The name of the StorageClass.
     (2) The type of storage provisioner. Specify vsphere-volume.
     (3) The type of disk. Specify either zeroedthick or thin.
     (4) The source datastore where the disks will be created.
  2. After you request a PV using the StorageClass shown in the previous step, OKD automatically creates VMDK disks in the vSphere infrastructure. To verify that the disks were created, use the Datastore browser in vSphere. A sample claim is sketched after this procedure.

    vSphere-volume disks are ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information.
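
The following is a minimal sketch of such a request; the claim name and size are examples, and storageClassName refers to the vsphere-standard StorageClass defined above:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: vsphere-claim                # example name
  spec:
    accessModes:
    - ReadWriteOnce                    # vSphere volumes support RWO only
    storageClassName: vsphere-standard
    resources:
      requests:
        storage: 2Gi                   # example size

Create the claim with oc create -f <file> and watch oc get pvc until it reports Bound; the backing VMDK then appears in the configured datastore.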

Statically Provisioning VMware vSphere volumes

Storage must exist in the underlying infrastructure before it can be mounted as a volume in OKD. After ensuring OKD is configured for vSphere, all that is required for OKD and vSphere is a VM folder path, file system type, and the PersistentVolume API.

Creating PersistentVolumes

  1. Define a PV object definition, for example vsphere-pv.yaml:

       apiVersion: v1
       kind: PersistentVolume
       metadata:
         name: pv0001 (1)
       spec:
         capacity:
           storage: 2Gi (2)
         accessModes:
         - ReadWriteOnce
         persistentVolumeReclaimPolicy: Retain
         vsphereVolume: (3)
           volumePath: "[datastore1] volumes/myDisk" (4)
           fsType: ext4 (5)

     (1) The name of the volume. This is how it is identified by PV claims or from pods.
     (2) The amount of storage allocated to this volume.
     (3) The volume type being used. This example uses vsphereVolume. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores.
     (4) The existing VMDK volume to use. You must enclose the datastore name in square brackets ([]) in the volume definition, as shown.
     (5) The file system type to mount. For example, ext4, xfs, or other file systems.

    Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure.

  2. Create the PV:

       $ oc create -f vsphere-pv.yaml
       persistentvolume "pv0001" created
  3. Verify that the PV was created:

       $ oc get pv
       NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
       pv0001    <none>    2Gi        RWO           Available                       2s

Now you can request storage using PV claims, which can bind to your new PV.

PV claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a PV from a different namespace causes the pod to fail.
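
As a sketch, a claim in your project that binds to the statically provisioned pv0001 volume above might look like the following; the claim name is an example, and the empty storageClassName skips dynamic provisioning so the claim binds to an existing PV:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: vsphere-static-claim   # example name
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: ""         # bind to a pre-existing PV instead of provisioning a new one
    resources:
      requests:
        storage: 2Gi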

Formatting VMware vSphere volumes

Before OKD mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the given file system.

Because OKD formats them before the first use, you can use unformatted vSphere volumes as PVs.

Configuring the OKD registry for vSphere

Configuring the OKD registry for vSphere using Ansible

Procedure

To configure the Ansible inventory for the registry to use a vSphere volume:

  [OSEv3:vars]
  # vSphere Provider Configuration
  openshift_hosted_registry_storage_kind=vsphere (1)
  openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] (2)
  openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] (3)
  openshift_hosted_registry_replicas=1 (4)

(1) The storage type.
(2) vSphere volumes only support RWO.
(3) The annotation for the volume.
(4) The number of replicas to configure.

The brackets in the configuration file above are required.

Dynamically provisioning storage for OKD registry

To use vSphere volume storage, edit the registry’s configuration and mount it to the registry pod.

Procedure

  1. Create a PVC definition for the vSphere volume, for example pvc-registry.yaml:

       kind: PersistentVolumeClaim
       apiVersion: v1
       metadata:
         name: vsphere-registry-storage
         annotations:
           volume.beta.kubernetes.io/storage-class: vsphere-standard
       spec:
         accessModes:
         - ReadWriteOnce
         resources:
           requests:
             storage: 30Gi
  2. Create the file in OKD:

     $ oc create -f pvc-registry.yaml
  3. Update the volume configuration to use the new PVC:

     $ oc set volume dc docker-registry --add --name=registry-storage -t \
       pvc --claim-name=vsphere-registry-storage --overwrite
  4. Redeploy the registry to read the updated configuration:

     $ oc rollout latest docker-registry -n default
  5. Verify the volume has been assigned:

     $ oc set volume dc docker-registry -n default

Manually provisioning storage for OKD registry

Run the following commands to manually create storage for the registry if a StorageClass is unavailable or not used. A PV that references the resulting VMDK is sketched after the commands.

  # VMFS
  cd /vmfs/volumes/datastore1/
  mkdir kubevols # Not needed but good hygiene

  # VSAN
  cd /vmfs/volumes/vsanDatastore/
  /usr/lib/vmware/osfs/bin/osfs-mkdir kubevols # Needed

  cd kubevols
  vmkfstools -c 25G registry.vmdk
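
As a sketch (the PV name is arbitrary and the datastore path matches the VMFS example above), the VMDK created by vmkfstools can then be exposed to OKD as a statically provisioned PV for the registry:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: registry-volume            # example name
  spec:
    capacity:
      storage: 25Gi                  # matches the 25G VMDK created above
    accessModes:
    - ReadWriteOnce
    vsphereVolume:
      volumePath: "[datastore1] kubevols/registry.vmdk"
      fsType: ext4                   # example file system type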

About Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage (RHOCS) is a provider-agnostic persistent storage solution for OKD, whether in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OKD for deployment, management, and monitoring, regardless of whether it is installed on OKD (converged) or with OKD (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.11 Deployment Guide.

Backup of persistent volumes

OKD provisions new volumes as independent persistent disks so that the volume can be freely attached to and detached from any node in the cluster. As a consequence, it is not possible to back up volumes by using snapshots.

To create a backup of PVs:

  1. Stop the application using the PV.

  2. Clone the persistent disk.

  3. Restart the application.

  4. Create a backup of the cloned disk.

  5. Delete the cloned disk.