Managing control plane machines with the Control Plane Machine Set Operator

The Control Plane Machine Set Operator automates several essential aspects of control plane management.

Updating the control plane configuration

You can make changes to the configuration of the machines in the control plane by updating the specification in the control plane machine set custom resource (CR).

The Control Plane Machine Set Operator monitors the control plane machines and compares their configuration with the specification in the control plane machine set CR. When there is a discrepancy between the specification in the CR and the configuration of a control plane machine, the Operator marks that control plane machine for replacement.

For more information about the parameters in the CR, see “Control Plane Machine Set Operator configuration”.
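
Before you make changes, you can review the current specification by viewing the CR with the OpenShift CLI (oc). For example, the following command prints the full resource, including the providerSpec values that later procedures in this section modify:

    $ oc --namespace openshift-machine-api get controlplanemachineset.machine.openshift.io cluster -o yaml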

Prerequisites

  • Your cluster has an activated and functioning Control Plane Machine Set Operator.

Procedure

  1. Edit your control plane machine set CR by running the following command:

    $ oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster
  2. Change the values of any fields that you want to update in your cluster configuration.

  3. Save your changes.

Next steps

  • For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration.

  • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

Automatically updating the control plane configuration

You can use the RollingUpdate update strategy to automatically propagate changes to your control plane configuration.

For clusters that use the default RollingUpdate update strategy, the Operator creates a replacement control plane machine with the configuration that is specified in the CR. When the replacement control plane machine is ready, the Operator deletes the control plane machine that is marked for replacement. The replacement machine then joins the control plane.

If multiple control plane machines are marked for replacement, the Operator repeats this replacement process one machine at a time until each machine is replaced.
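
To observe the replacement as it happens, you can watch the machines in the openshift-machine-api namespace with a standard oc command, for example:

    $ oc get machines -n openshift-machine-api -w

While the rolling update is in progress, replacement machines typically appear in the Provisioning phase and move to Running, and the machines that are marked for replacement move to Deleting.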

Testing changes to the control plane configuration

You can use the OnDelete update strategy to test changes to your control plane configuration. With this update strategy, you replace control plane machines manually. Manually replacing machines allows you to test changes to your configuration on a single machine before applying the changes more broadly.

For clusters that are configured to use the OnDelete update strategy, the Operator creates a replacement control plane machine when you delete an existing machine. When the replacement control plane machine is ready, the etcd Operator allows the existing machine to be deleted. The replacement machine then joins the control plane.

If multiple control plane machines are deleted, the Operator creates all of the required replacement machines simultaneously.
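
For example, to trigger the replacement of a single control plane machine, delete it by using a standard oc command and substituting the name of the machine that you want to replace:

    $ oc delete machine <control_plane_machine_name> -n openshift-machine-api

The Operator then provisions a replacement machine with the updated configuration, and the etcd Operator allows the deleted machine to be removed when the replacement is ready.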

Enabling Amazon Web Services features for control plane machines

You can enable Amazon Web Services (AWS) features on control plane machines by changing the configuration of your control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

Restricting the API server to private

After you deploy a cluster to Amazon Web Services (AWS), you can reconfigure the API server to use only the private zone.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Have access to the web console as a user with admin privileges.

Procedure

  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component:

      • For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.
    2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.

  2. Remove the external load balancers by deleting the following lines in the control plane machine set custom resource:

    providerSpec:
      value:
        loadBalancers:
        - name: lk4pj-ext (1)
          type: network (1)
        - name: lk4pj-int
          type: network
    (1) Delete this line.
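
After you remove the lines that are marked with (1), only the internal load balancer remains in the control plane machine set configuration, for example:

    providerSpec:
      value:
        loadBalancers:
        - name: lk4pj-int
          type: network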

Changing the Amazon Web Services instance type by using a control plane machine set

You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the specification in the control plane machine set custom resource (CR).

Prerequisites

  • Your AWS cluster uses a control plane machine set.

Procedure

  1. Edit the following line under the providerSpec field:

    providerSpec:
      value:
        ...
        instanceType: <compatible_aws_instance_type> (1)
    (1) Specify a larger AWS instance type with the same base as the previous selection. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge.
  2. Save your changes.
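
To confirm that the new instance type is in use after the Operator finishes replacing the machines, you can list the control plane machines; in the default output of this command, the TYPE column typically shows the instance type of each machine:

    $ oc get machines -n openshift-machine-api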

Machine set options for the Amazon EC2 Instance Metadata Service

You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.

To change the IMDS configuration for existing machines, edit the machine set YAML file that manages those machines.

Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.
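
As an illustration of what IMDSv2 support requires, a workload must first obtain a session token and then present it with every metadata request. The following commands, run from inside an instance, show the standard AWS token flow; they are an example for testing compatibility and are not part of the OKD configuration:

    # Request an IMDSv2 session token that is valid for up to six hours.
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

    # Present the token when querying the instance metadata service.
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/instance-id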

Configuring IMDS by using machine sets

You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.

Procedure

  • Add or edit the following lines under the providerSpec field:

    providerSpec:
      value:
        metadataServiceOptions:
          authentication: Required (1)
    (1) To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.

Machine sets that deploy machines as Dedicated Instances

You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.

Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.

Creating Dedicated Instances by using machine sets

You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.

Procedure

  • Specify a dedicated tenancy under the providerSpec field:

    providerSpec:
      placement:
        tenancy: dedicated
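
To confirm that new machines run as Dedicated Instances, you can check the placement tenancy of the underlying EC2 instances, for example with the AWS CLI. The instance ID placeholder below must be replaced with the ID of an instance that the machine set created:

    $ aws ec2 describe-instances --instance-ids <instance_id> \
        --query "Reservations[].Instances[].Placement.Tenancy" --output text

The command returns dedicated for a Dedicated Instance.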

Enabling Microsoft Azure features for control plane machines

You can enable Microsoft Azure features on control plane machines by changing the configuration of your control plane machine set. When you save an update to the control plane machine set, the Control Plane Machine Set Operator updates the control plane machines according to your configured update strategy.

Restricting the API server to private

After you deploy a cluster to Microsoft Azure, you can reconfigure the API server to use only the private zone.

Prerequisites

  • Install the OpenShift CLI (oc).

  • Have access to the web console as a user with admin privileges.

Procedure

  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component:

      • For Azure, delete the api-internal rule for the load balancer.
    2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.

  2. Remove the external load balancers by deleting the following lines in the control plane machine set custom resource:

    providerSpec:
      value:
        loadBalancers:
        - name: lk4pj-ext (1)
          type: network (1)
        - name: lk4pj-int
          type: network
    (1) Delete this line.

Selecting an Azure Marketplace image

You can create a machine set running on Azure that deploys machines that use the Azure Marketplace offering. To use this offering, you must first obtain the Azure Marketplace image. When obtaining your image, consider the following:

  • While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.

  • The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OKD are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.

Installing images with the Azure marketplace is not supported on clusters with arm64 instances.

Prerequisites

  • You have installed the Azure CLI client (az).

  • Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.

Procedure

  1. Display all of the available OKD images by running one of the following commands:

    • North America:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table

      Example output

      Offer          Publisher  Sku                 Urn                                                      Version
      -------------  ---------  ------------------  -------------------------------------------------------  --------------
      rh-ocp-worker  RedHat     rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.8.2021122100        4.8.2021122100
      rh-ocp-worker  RedHat     rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100   4.8.2021122100
    • EMEA:

      $ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table

      Example output

      Offer          Publisher       Sku                 Urn                                                              Version
      -------------  --------------  ------------------  ---------------------------------------------------------------  --------------
      rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100        4.8.2021122100
      rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100   4.8.2021122100

    Regardless of the version of OKD that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process.

  2. Inspect the image for your offer by running one of the following commands:

    • North America:

      $ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  3. Review the terms of the offer by running one of the following commands:

    • North America:

      $ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  4. Accept the terms of the offering by running one of the following commands:

    • North America:

      $ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
    • EMEA:

      $ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
  5. Record the image details of your offer, specifically the values for publisher, offer, sku, and version.

  6. Add the following parameters to the providerSpec section of your machine set YAML file using the image details for your offer:

    Sample providerSpec image values for Azure Marketplace machines

    providerSpec:
      value:
        image:
          offer: rh-ocp-worker
          publisher: redhat
          resourceID: ""
          sku: rh-ocp-worker
          type: MarketplaceWithPlan
          version: 4.8.2021122100

Enabling Azure boot diagnostics

You can enable boot diagnostics on Azure machines that your machine set creates.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  • Add the diagnostics configuration that is applicable to your storage type to the providerSpec field in your machine set YAML file:

    • For an Azure Managed storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: AzureManaged (1)
      (1) Specifies an Azure Managed storage account.
    • For an Azure Unmanaged storage account:

      providerSpec:
        diagnostics:
          boot:
            storageAccountType: CustomerManaged (1)
            customerManaged:
              storageAccountURI: https://<storage-account>.blob.core.windows.net (2)
      (1) Specifies an Azure Unmanaged storage account.
      (2) Replace <storage-account> with the name of your storage account.

      Only the Azure Blob Storage data service is supported.

Verification

  • On the Microsoft Azure portal, review the Boot diagnostics page for a machine deployed by the machine set, and verify that you can see the serial logs for the machine.
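
Alternatively, you can retrieve the serial log from the command line with the Azure CLI; the VM name and resource group below are placeholders that you must replace with your own values:

    $ az vm boot-diagnostics get-boot-log --name <vm_name> --resource-group <resource_group>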

Machine sets that deploy machines with ultra disks as data disks

You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads.

Creating machines with ultra disks by using machine sets

You can deploy machines with ultra disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Create a custom secret in the openshift-machine-api namespace using the master data secret by running the following command:

    $ oc -n openshift-machine-api \
      get secret <role>-user-data \ (1)
      --template='{{index .data.userData | base64decode}}' | jq > userData.txt (2)
    (1) Replace <role> with master.
    (2) Specify userData.txt as the name of the text file that this command creates; a later step uses this file to create the new custom secret.
  2. In a text editor, open the userData.txt file and locate the final } character in the file.

    1. On the immediately preceding line, add a comma (,).

    2. Create a new line after the comma and add the following configuration details:

      1. "storage": {
      2. "disks": [ (1)
      3. {
      4. "device": "/dev/disk/azure/scsi1/lun0", (2)
      5. "partitions": [ (3)
      6. {
      7. "label": "lun0p1", (4)
      8. "sizeMiB": 1024, (5)
      9. "startMiB": 0
      10. }
      11. ]
      12. }
      13. ],
      14. "filesystems": [ (6)
      15. {
      16. "device": "/dev/disk/by-partlabel/lun0p1",
      17. "format": "xfs",
      18. "path": "/var/lib/lun0p1"
      19. }
      20. ]
      21. },
      22. "systemd": {
      23. "units": [ (7)
      24. {
      25. "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var/lib/lun0p1\nWhat=/dev/disk/by-partlabel/lun0p1\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", (8)
      26. "enabled": true,
      27. "name": "var-lib-lun0p1.mount"
      28. }
      29. ]
      30. }
      1The configuration details for the disk that you want to attach to a node as an ultra disk.
      2Specify the lun value that is defined in the dataDisks stanza of the machine set you are using. For example, if the machine set contains lun: 0, specify lun0. You can initialize multiple data disks by specifying multiple “disks” entries in this configuration file. If you specify multiple “disks” entries, ensure that the lun value for each matches the value in the machine set.
      3The configuration details for a new partition on the disk.
      4Specify a label for the partition. You might find it helpful to use hierarchical names, such as lun0p1 for the first partition of lun0.
      5Specify the total size in MiB of the partition.
      6Specify the filesystem to use when formatting a partition. Use the partition label to specify the partition.
      7Specify a systemd unit to mount the partition at boot. Use the partition label to specify the partition. You can create multiple partitions by specifying multiple “partitions” entries in this configuration file. If you specify multiple “partitions” entries, you must specify a systemd unit for each.
      8For Where, specify the value of storage.filesystems.path. For What, specify the value of storage.filesystems.device.
  3. Extract the disabling template value to a file called disableTemplating.txt by running the following command:

    $ oc -n openshift-machine-api get secret <role>-user-data \ (1)
      --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt
    (1) Replace <role> with master.
  4. Combine the userData.txt file and disableTemplating.txt file to create a data secret file by running the following command:

    $ oc -n openshift-machine-api create secret generic <role>-user-data-x5 \ (1)
      --from-file=userData=userData.txt \
      --from-file=disableTemplating=disableTemplating.txt
    (1) For <role>-user-data-x5, specify the name of the secret. Replace <role> with master.
  5. Edit your control plane machine set CR by running the following command:

    $ oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster
  6. Add the following lines in the positions indicated:

    apiVersion: machine.openshift.io/v1beta1
    kind: ControlPlaneMachineSet
    spec:
      template:
        spec:
          metadata:
            labels:
              disk: ultrassd (1)
          providerSpec:
            value:
              ultraSSDCapability: Enabled (2)
              dataDisks: (2)
              - nameSuffix: ultrassd
                lun: 0
                diskSizeGB: 4
                deletionPolicy: Delete
                cachingType: None
                managedDisk:
                  storageAccountType: UltraSSD_LRS
              userDataSecret:
                name: <role>-user-data-x5 (3)
    (1) Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value.
    (2) These lines enable the use of ultra disks. For dataDisks, include the entire stanza.
    (3) Specify the user data secret created earlier. Replace <role> with master.
  7. Save your changes.

    • For clusters that use the default RollingUpdate update strategy, the Operator automatically propagates the changes to your control plane configuration.

    • For clusters that are configured to use the OnDelete update strategy, you must replace your control plane machines manually.

Verification

  1. Validate that the machines are created by running the following command:

    $ oc get machines

    The machines should be in the Running state.

  2. For a machine that is running and has a node attached, validate the partition by running the following command:

    $ oc debug node/<node-name> -- chroot /host lsblk

    In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with --. The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine.

Next steps

  • To use an ultra disk on the control plane, reconfigure your workload to use the control plane’s ultra disk mount point.

Troubleshooting resources for machine sets that enable ultra disks

Use the information in this section to understand and recover from issues you might encounter.

Incorrect ultra disk configuration

If an incorrect configuration of the ultraSSDCapability parameter is specified in the machine set, the machine provisioning fails.

For example, if the ultraSSDCapability parameter is set to Disabled, but an ultra disk is specified in the dataDisks parameter, the following error message appears:

  StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.

  • To resolve this issue, verify that your machine set configuration is correct.

Unsupported disk parameters

If a region, availability zone, or instance size that is not compatible with ultra disks is specified in the machine set, the machine provisioning fails. Check the logs for the following error message:

  failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="BadRequest" Message="Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>."

  • To resolve this issue, verify that you are using this feature in a supported environment and that your machine set configuration is correct.

Unable to delete disks

If the deletion of ultra disks as data disks is not working as expected, the machines are deleted but the data disks are orphaned. If you want to remove them, you must delete the orphaned disks manually.
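
To locate orphaned data disks, you can query Azure for unattached disks. The following Azure CLI invocation is one possible approach; the JMESPath filter and the resource group are examples that you should adapt to your environment:

    $ az disk list --resource-group <resource_group> --query "[?diskState=='Unattached'].{Name:name, Size:diskSizeGb}" -o table

You can then delete any disks that you no longer need with az disk delete.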

Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must be in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it is not, you must grant an additional reader role on the disk encryption set.
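
The following Azure CLI commands outline one way to create these resources. This is a minimal sketch with placeholder names, not a complete setup; in particular, granting the disk encryption set the permissions it needs on the key vault is not shown:

    # Create a key vault with purge protection, which Azure requires for
    # keys that back a disk encryption set.
    $ az keyvault create --name <key_vault_name> --resource-group <resource_group> \
        --location <region> --enable-purge-protection true

    # Create an encryption key in the vault.
    $ az keyvault key create --vault-name <key_vault_name> --name <key_name> --protection software

    # Create a disk encryption set that references the key.
    $ az disk-encryption-set create --name <disk_encryption_set_name> --resource-group <resource_group> \
        --key-url "$(az keyvault key show --vault-name <key_vault_name> --name <key_name> --query key.kid -o tsv)" \
        --source-vault <key_vault_name>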

Procedure

  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    providerSpec:
      value:
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS

Accelerated Networking for Microsoft Azure VMs

Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled after installation.

Limitations

Consider the following limitations when deciding whether to use Accelerated Networking:

  • Accelerated Networking is only supported on clusters where the Machine API is operational.

  • Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.

Enabling Accelerated Networking on an existing Microsoft Azure cluster

You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster where the Machine API is operational.

Procedure

  • Add the following to the providerSpec field:

    providerSpec:
      value:
        acceleratedNetworking: true (1)
        vmSize: <azure-vm-size> (2)
    (1) This line enables Accelerated Networking.
    (2) Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.

Verification

  • On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.
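
If you prefer to check from the command line, you can query the network interface of a machine with the Azure CLI; the NIC name and resource group below are placeholders that you must replace with your own values:

    $ az network nic show --name <nic_name> --resource-group <resource_group> --query enableAcceleratedNetworking

The command returns true when Accelerated Networking is enabled on the interface.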