Control plane machine set configuration

These example YAML snippets show the base structure for a control plane machine set custom resource (CR) and platform-specific samples for provider specification and failure domain configurations.

Sample YAML for a control plane machine set custom resource

The base of the ControlPlaneMachineSet CR is structured the same way for all platforms.

Sample ControlPlaneMachineSet CR YAML file

  apiVersion: machine.openshift.io/v1
  kind: ControlPlaneMachineSet
  metadata:
    name: cluster (1)
    namespace: openshift-machine-api
  spec:
    replicas: 3 (2)
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-cluster: <cluster_id> (3)
        machine.openshift.io/cluster-api-machine-role: master
        machine.openshift.io/cluster-api-machine-type: master
    state: Active (4)
    strategy:
      type: RollingUpdate (5)
    template:
      machineType: machines_v1beta1_machine_openshift_io
      machines_v1beta1_machine_openshift_io:
        failureDomains:
          platform: <platform> (6)
          <platform_failure_domains> (7)
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: <cluster_id>
            machine.openshift.io/cluster-api-machine-role: master
            machine.openshift.io/cluster-api-machine-type: master
        spec:
          providerSpec:
            value:
              <platform_provider_spec> (8)
1. Specifies the name of the ControlPlaneMachineSet CR, which is cluster. Do not change this value.
2. Specifies the number of control plane machines. Only clusters with three control plane machines are supported, so the replicas value is 3. Horizontal scaling is not supported. Do not change this value.
3. Specifies the infrastructure ID, which is based on the cluster ID that you set when you provisioned the cluster. You must specify this value when you create a ControlPlaneMachineSet CR. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

4. Specifies the state of the Operator. When the state is Inactive, the Operator is not operational. You can activate the Operator by setting the value to Active; a patch sketch follows this list.

Before you activate the Operator, you must ensure that the ControlPlaneMachineSet CR configuration is correct for your cluster requirements. For more information about activating the Control Plane Machine Set Operator, see “Getting started with control plane machine sets”.

5. Specifies the update strategy for the cluster. The allowed values are OnDelete and RollingUpdate. The default value is RollingUpdate. For more information about update strategies, see "Updating the control plane configuration".
6. Specifies the cloud provider platform name. Do not change this value.
7. Specifies the <platform_failure_domains> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample failure domain configuration for your cloud provider.

VMware vSphere does not support failure domains.

8. Specifies the <platform_provider_spec> configuration for the cluster. The format and values of this section are provider-specific. For more information, see the sample provider specification for your cloud provider.
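
To activate the Operator after you verify the configuration, you can set the state field with the OpenShift CLI. The following is a minimal sketch; patching with oc patch is one option, and editing the CR interactively with oc edit works as well:

  $ oc patch controlplanemachineset.machine.openshift.io cluster \
      -n openshift-machine-api \
      --type merge -p '{"spec":{"state":"Active"}}'

You can confirm the current state by reading it back:

  $ oc get controlplanemachineset.machine.openshift.io cluster \
      -n openshift-machine-api -o jsonpath='{.spec.state}{"\n"}'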

Provider-specific configuration

The <platform_provider_spec> and <platform_failure_domains> sections of the control plane machine set resources are provider-specific. Refer to the example YAML for your cluster's cloud provider in the sections that follow.

Sample YAML for configuring Amazon Web Services clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an Amazon Web Services (AWS) cluster.

Sample AWS provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
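
Because the provider specification must match what the installation program created, it can help to print the providerSpec of an existing control plane machine and compare it against your ControlPlaneMachineSet values. A minimal sketch, assuming the standard master role label shown in the base CR:

  $ oc get machines.machine.openshift.io -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master \
      -o jsonpath='{.items[0].spec.providerSpec.value}{"\n"}'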

Sample AWS providerSpec values

  providerSpec:
    value:
      ami:
        id: ami-<ami_id_string> (1)
      apiVersion: machine.openshift.io/v1beta1
      blockDevices:
      - ebs: (2)
          encrypted: true
          iops: 0
          kmsKey:
            arn: ""
          volumeSize: 120
          volumeType: gp3
      credentialsSecret:
        name: aws-cloud-credentials (3)
      deviceIndex: 0
      iamInstanceProfile:
        id: <cluster_id>-master-profile (4)
      instanceType: m6i.xlarge (5)
      kind: AWSMachineProviderConfig (6)
      loadBalancers: (7)
      - name: <cluster_id>-int
        type: network
      - name: <cluster_id>-ext
        type: network
      metadata:
        creationTimestamp: null
      metadataServiceOptions: {}
      placement: (8)
        region: <region> (9)
      securityGroups:
      - filters:
        - name: tag:Name
          values:
          - <cluster_id>-master-sg (10)
      subnet: {} (11)
      userDataSecret:
        name: master-user-data (12)
1. Specifies the Fedora CoreOS (FCOS) Amazon Machine Image (AMI) ID for the cluster. The AMI must belong to the same region as the cluster. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region.
2. Specifies the configuration of an encrypted EBS volume.
3. Specifies the secret name for the cluster. Do not change this value.
4. Specifies the AWS Identity and Access Management (IAM) instance profile. Do not change this value.
5. Specifies the AWS instance type for the control plane.
6. Specifies the cloud provider platform type. Do not change this value.
7. Specifies the internal (int) and external (ext) load balancers for the cluster.
8. This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.
9. Specifies the AWS region for the cluster.
10. Specifies the security group for the control plane machines.
11. This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.
12. Specifies the control plane user data secret. Do not change this value.

Sample AWS failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing AWS concept of an Availability Zone (AZ). The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring AWS failure domains in the control plane machine set, you must specify the availability zone name and the subnet to use.
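
If you are unsure which zone and subnet names to use, you can query AWS for them. A hedged sketch that assumes the AWS CLI is installed and configured, and that your cluster subnets carry the default <cluster_id>-private-* name tags:

  $ aws ec2 describe-availability-zones --region <region>
  $ aws ec2 describe-subnets --region <region> \
      --filters "Name=tag:Name,Values=<cluster_id>-private-*"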

Sample AWS failure domain values

  failureDomains:
    aws:
    - placement:
        availabilityZone: <aws_zone_a> (1)
      subnet: (2)
        filters:
        - name: tag:Name
          values:
          - <cluster_id>-private-<aws_zone_a> (3)
        type: Filters (4)
    - placement:
        availabilityZone: <aws_zone_b> (5)
      subnet:
        filters:
        - name: tag:Name
          values:
          - <cluster_id>-private-<aws_zone_b> (6)
        type: Filters
    platform: AWS (7)
1. Specifies an AWS availability zone for the first failure domain.
2. Specifies a subnet configuration. In this example, the subnet type is Filters, so there is a filters stanza.
3. Specifies the subnet name for the first failure domain, using the infrastructure ID and the AWS availability zone.
4. Specifies the subnet type. The allowed values are ARN, Filters, and ID. The default value is Filters.
5. Specifies an AWS availability zone for an additional failure domain.
6. Specifies the subnet name for the additional failure domain, using the infrastructure ID and the AWS availability zone.
7. Specifies the cloud provider platform name. Do not change this value.
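
After the Operator reconciles the configuration, you can confirm that the control plane machines are spread across the configured zones. A minimal sketch; it assumes that the machine controller sets the machine.openshift.io/zone label, as it does on supported cloud platforms:

  $ oc get machines.machine.openshift.io -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master \
      -L machine.openshift.io/zone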

Sample YAML for configuring Google Cloud Platform clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for a Google Cloud Platform (GCP) cluster.

Sample GCP provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Image path

The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

  $ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{"\n"}' \
    get ControlPlaneMachineSet/cluster

Sample GCP providerSpec values

  providerSpec:
    value:
      apiVersion: machine.openshift.io/v1beta1
      canIPForward: false
      credentialsSecret:
        name: gcp-cloud-credentials (1)
      deletionProtection: false
      disks:
      - autoDelete: true
        boot: true
        image: <path_to_image> (2)
        labels: null
        sizeGb: 200
        type: pd-ssd
      kind: GCPMachineProviderSpec (3)
      machineType: e2-standard-4
      metadata:
        creationTimestamp: null
      metadataServiceOptions: {}
      networkInterfaces:
      - network: <cluster_id>-network
        subnetwork: <cluster_id>-master-subnet
      projectID: <project_name> (4)
      region: <region> (5)
      serviceAccounts:
      - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com
        scopes:
        - https://www.googleapis.com/auth/cloud-platform
      shieldedInstanceConfig: {}
      tags:
      - <cluster_id>-master
      targetPools:
      - <cluster_id>-api
      userDataSecret:
        name: master-user-data (6)
      zone: "" (7)
1. Specifies the secret name for the cluster. Do not change this value.
2. Specifies the path to the image that was used to create the disk. To use a GCP Marketplace image, specify the offer to use instead of this image path.
3. Specifies the cloud provider platform type. Do not change this value.
4. Specifies the name of the GCP project that you use for your cluster.
5. Specifies the GCP region for the cluster.
6. Specifies the control plane user data secret. Do not change this value.
7. This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.

Sample GCP failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing GCP concept of a zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring GCP failure domains in the control plane machine set, you must specify the zone name to use.

Sample GCP failure domain values

  failureDomains:
    gcp:
    - zone: <gcp_zone_a> (1)
    - zone: <gcp_zone_b> (2)
    - zone: <gcp_zone_c>
    - zone: <gcp_zone_d>
    platform: GCP (3)
1. Specifies a GCP zone for the first failure domain.
2. Specifies an additional failure domain. Further failure domains are added the same way.
3. Specifies the cloud provider platform name. Do not change this value.
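
To see which zone names are available in a region before you fill in the failureDomains stanza, you can query GCP. A hedged sketch that assumes the gcloud CLI is installed and authenticated:

  $ gcloud compute zones list --filter="region:<region>"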

Sample YAML for configuring Microsoft Azure clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an Azure cluster.

Sample Azure provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane Machine CR that is created by the installation program. You can omit any field that is set in the failure domain section of the CR.

In the following example, <cluster_id> is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Sample Azure providerSpec values

  providerSpec:
    value:
      acceleratedNetworking: true
      apiVersion: machine.openshift.io/v1beta1
      credentialsSecret:
        name: azure-cloud-credentials (1)
        namespace: openshift-machine-api
      diagnostics: {}
      image: (2)
        offer: ""
        publisher: ""
        resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 (3)
        sku: ""
        version: ""
      internalLoadBalancer: <cluster_id>-internal (4)
      kind: AzureMachineProviderSpec (5)
      location: <region> (6)
      managedIdentity: <cluster_id>-identity
      metadata:
        creationTimestamp: null
        name: <cluster_id>
      networkResourceGroup: <cluster_id>-rg
      osDisk: (7)
        diskSettings: {}
        diskSizeGB: 1024
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      publicIP: false
      publicLoadBalancer: <cluster_id> (8)
      resourceGroup: <cluster_id>-rg
      subnet: <cluster_id>-master-subnet (9)
      userDataSecret:
        name: master-user-data (10)
      vmSize: Standard_D8s_v3
      vnet: <cluster_id>-vnet
      zone: "" (11)
1. Specifies the secret name for the cluster. Do not change this value.
2. Specifies the image details for your control plane machine set.
3. Specifies an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix.
4. Specifies the internal load balancer for the control plane. This field might not be preconfigured but is required in both the ControlPlaneMachineSet and control plane Machine CRs; a check for this field follows this list.
5. Specifies the cloud provider platform type. Do not change this value.
6. Specifies the region in which to place control plane machines.
7. Specifies the disk configuration for the control plane.
8. Specifies the public load balancer for the control plane.
9. Specifies the subnet for the control plane.
10. Specifies the control plane user data secret. Do not change this value.
11. This parameter is configured in the failure domain, and is shown with an empty value here. If a value specified for this parameter differs from the value in the failure domain, the Operator overwrites it with the value in the failure domain.
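
Because the internalLoadBalancer field might be missing from a pre-existing control plane machine, it is worth checking before you copy values. A minimal sketch; <master_machine_name> is a placeholder for one of your control plane Machine names:

  $ oc get machines.machine.openshift.io <master_machine_name> \
      -n openshift-machine-api \
      -o jsonpath='{.spec.providerSpec.value.internalLoadBalancer}{"\n"}'

An empty result means that the field is not set and that you must add it to both CRs.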

Sample Azure failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing Azure concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

When configuring Azure failure domains in the control plane machine set, you must specify the availability zone name.

Sample Azure failure domain values

  failureDomains:
    azure: (1)
    - zone: "1"
    - zone: "2"
    - zone: "3"
    platform: Azure (2)
1. Each instance of zone specifies an Azure availability zone for a failure domain.
2. Specifies the cloud provider platform name. Do not change this value.

Sample YAML for configuring Nutanix clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippet shows a provider specification configuration for a Nutanix cluster.

Sample Nutanix provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Values obtained by using the OpenShift CLI

In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.

Infrastructure ID

The <cluster_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

Sample Nutanix providerSpec values

  providerSpec:
    value:
      apiVersion: machine.openshift.io/v1
      bootType: "" (1)
      categories: (2)
      - key: <category_name>
        value: <category_value>
      cluster: (3)
        type: uuid
        uuid: <cluster_uuid>
      credentialsSecret:
        name: nutanix-credentials (4)
      image: (5)
        name: <cluster_id>-rhcos
        type: name
      kind: NutanixMachineProviderConfig (6)
      memorySize: 16Gi (7)
      metadata:
        creationTimestamp: null
      project: (8)
        type: name
        name: <project_name>
      subnets: (9)
      - type: uuid
        uuid: <subnet_uuid>
      systemDiskSize: 120Gi (10)
      userDataSecret:
        name: master-user-data (11)
      vcpuSockets: 8 (12)
      vcpusPerSocket: 1 (13)
1. Specifies the boot type that the control plane machines use. For more information about boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment. Valid values are Legacy, SecureBoot, or UEFI. The default is Legacy.

You must use the Legacy boot type in OKD 4.14. A sketch for checking the current boot type follows this list.

2. Specifies one or more Nutanix Prism categories to apply to control plane machines. This stanza requires key and value parameters for a category key-value pair that exists in Prism Central. For more information about categories, see Category management.
3. Specifies a Nutanix Prism Element cluster configuration. In this example, the cluster type is uuid, so there is a uuid stanza.
4. Specifies the secret name for the cluster. Do not change this value.
5. Specifies the image that was used to create the disk.
6. Specifies the cloud provider platform type. Do not change this value.
7. Specifies the memory allocated for the control plane machines.
8. Specifies the Nutanix project that you use for your cluster. In this example, the project type is name, so there is a name stanza.
9. Specifies a subnet configuration. In this example, the subnet type is uuid, so there is a uuid stanza.
10. Specifies the VM disk size for the control plane machines.
11. Specifies the control plane user data secret. Do not change this value.
12. Specifies the number of vCPU sockets allocated for the control plane machines.
13. Specifies the number of vCPUs for each control plane vCPU socket.
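
Because OKD 4.14 requires the Legacy boot type, you might want to confirm what your existing control plane machines use. A minimal sketch, assuming the standard master role label; an empty result indicates the default value, Legacy:

  $ oc get machines.machine.openshift.io -n openshift-machine-api \
      -l machine.openshift.io/cluster-api-machine-role=master \
      -o jsonpath='{.items[*].spec.providerSpec.value.bootType}{"\n"}'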

Sample YAML for configuring VMware vSphere clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippet shows a provider specification configuration for a VMware vSphere cluster.

Sample vSphere provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample vSphere providerSpec values

  providerSpec:
    value:
      apiVersion: machine.openshift.io/v1beta1
      credentialsSecret:
        name: vsphere-cloud-credentials (1)
      diskGiB: 120 (2)
      kind: VSphereMachineProviderSpec (3)
      memoryMiB: 16384 (4)
      metadata:
        creationTimestamp: null
      network: (5)
        devices:
        - networkName: <vm_network_name>
      numCPUs: 4 (6)
      numCoresPerSocket: 4 (7)
      snapshot: ""
      template: <vm_template_name> (8)
      userDataSecret:
        name: master-user-data (9)
      workspace:
        datacenter: <vcenter_datacenter_name> (10)
        datastore: <vcenter_datastore_name> (11)
        folder: <path_to_vcenter_vm_folder> (12)
        resourcePool: <vsphere_resource_pool> (13)
        server: <vcenter_server_ip> (14)
1. Specifies the secret name for the cluster. Do not change this value.
2. Specifies the VM disk size for the control plane machines.
3. Specifies the cloud provider platform type. Do not change this value.
4. Specifies the memory allocated for the control plane machines.
5. Specifies the network on which the control plane is deployed.
6. Specifies the number of CPUs allocated for the control plane machines.
7. Specifies the number of cores for each control plane CPU socket.
8. Specifies the vSphere VM template to use, such as user-5ddjd-rhcos.
9. Specifies the control plane user data secret. Do not change this value.
10. Specifies the vCenter datacenter for the control plane.
11. Specifies the vCenter datastore for the control plane.
12. Specifies the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
13. Specifies the vSphere resource pool for your VMs.
14. Specifies the vCenter server IP address or fully qualified domain name.
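
To confirm the workspace values that the control plane machine set currently carries, you can reuse the jsonpath pattern shown earlier for the GCP image path. A minimal sketch:

  $ oc -n openshift-machine-api get controlplanemachineset/cluster \
      -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.workspace}{"\n"}'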

Sample YAML for configuring OpenStack clusters

Some sections of the control plane machine set CR are provider-specific. The following example YAML snippets show provider specification and failure domain configurations for an OpenStack cluster.

Sample OpenStack provider specification

When you create a control plane machine set for an existing cluster, the provider specification must match the providerSpec configuration in the control plane machine custom resource (CR) that is created by the installation program.

Sample OpenStack providerSpec values

  providerSpec:
    value:
      apiVersion: machine.openshift.io/v1alpha1
      cloudName: openstack
      cloudsSecret:
        name: openstack-cloud-credentials (1)
        namespace: openshift-machine-api
      flavor: m1.xlarge (2)
      image: ocp1-2g2xs-rhcos
      kind: OpenstackProviderSpec (3)
      metadata:
        creationTimestamp: null
      networks:
      - filter: {}
        subnets:
        - filter:
            name: ocp1-2g2xs-nodes
            tags: openshiftClusterID=ocp1-2g2xs
      securityGroups:
      - filter: {}
        name: ocp1-2g2xs-master (4)
      serverGroupName: ocp1-2g2xs-master
      serverMetadata:
        Name: ocp1-2g2xs-master
        openshiftClusterID: ocp1-2g2xs
      tags:
      - openshiftClusterID=ocp1-2g2xs
      trunk: true
      userDataSecret:
        name: master-user-data
1. Specifies the secret name for the cluster. Do not change this value.
2. Specifies the OpenStack flavor type for the control plane.
3. Specifies the OpenStack cloud provider platform type. Do not change this value.
4. Specifies the security group for the control plane machines.

Sample OpenStack failure domain configuration

The control plane machine set concept of a failure domain is analogous to the existing OpenStack concept of an availability zone. The ControlPlaneMachineSet CR spreads control plane machines across multiple failure domains when possible.

The following example demonstrates the use of multiple Nova availability zones and Cinder availability zones.

Sample OpenStack failure domain values

  failureDomains:
    platform: OpenStack
    openstack:
    - availabilityZone: nova-az0
      rootVolume:
        availabilityZone: cinder-az0
    - availabilityZone: nova-az1
      rootVolume:
        availabilityZone: cinder-az1
    - availabilityZone: nova-az2
      rootVolume:
        availabilityZone: cinder-az2
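
To list the Nova and Cinder availability zones that are available to your cluster before you configure failure domains, you can use the OpenStack client. A hedged sketch, assuming the openstack CLI is installed and a clouds.yaml entry is configured:

  $ openstack availability zone list --compute
  $ openstack availability zone list --volume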