Creating a machine set on Azure

You can create a different machine set to serve a specific purpose in your OKD cluster on Microsoft Azure. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.

Machine API overview

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OKD resources.

For OKD 4.10 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OKD 4.10 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

Machines

A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.

Machine sets

MachineSet resources are groups of machines. Machine sets are to machines as replica sets are to pods. If you need more machines or must scale them down, you change the replicas field on the machine set to meet your compute need.
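For example, to grow a machine set to two replicas you can either edit the replicas field in the resource or use oc scale; the machine set name here is a placeholder:

  $ oc scale --replicas=2 machineset <machineset_name> -n openshift-machine-api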

The following custom resources add more capabilities to your cluster:

Machine autoscaler

The MachineAutoscaler resource automatically scales machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
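For illustration, a minimal MachineAutoscaler resource might look like the following sketch; the target machine set name and the scaling boundaries are placeholders:

  apiVersion: autoscaling.openshift.io/v1beta1
  kind: MachineAutoscaler
  metadata:
    name: <machineset_name>
    namespace: openshift-machine-api
  spec:
    minReplicas: 1
    maxReplicas: 12
    scaleTargetRef:
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      name: <machineset_name>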

Cluster autoscaler

This resource is based on the upstream cluster autoscaler project. In the OKD implementation, it is integrated with the Machine API by extending the machine set API. You can set cluster-wide scaling limits for resources such as cores, nodes, memory, GPU, and so on. You can set the priority so that the cluster prioritizes pods so that new nodes are not brought online for less important pods. You can also set the scaling policy so that you can scale up nodes but not scale them down.
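For illustration, a minimal ClusterAutoscaler resource might look like the following sketch; the resource must be named default, and the limit values shown are placeholders:

  apiVersion: autoscaling.openshift.io/v1
  kind: ClusterAutoscaler
  metadata:
    name: default
  spec:
    resourceLimits:
      maxNodesTotal: 24
      cores:
        min: 8
        max: 128
      memory:
        min: 4
        max: 256
    scaleDown:
      enabled: true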

Machine health check

The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
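For illustration, a minimal MachineHealthCheck resource might look like the following sketch; the selector label, timeouts, and maxUnhealthy threshold are placeholders:

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineHealthCheck
  metadata:
    name: example
    namespace: openshift-machine-api
  spec:
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-machineset: <machineset_name>
    unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: "Unknown"
      timeout: 300s
    maxUnhealthy: 40%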

In OKD version 3.11, you could not easily roll out a multi-zone architecture because the cluster did not manage machine provisioning. Beginning with OKD version 4.1, this process is easier. Each machine set is scoped to a single zone, so the installation program distributes machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails you always have another zone to which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.

Sample YAML for a machine set custom resource on Azure

This sample YAML defines a machine set that runs in zone 1 of a Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <role> (2)
    machine.openshift.io/cluster-api-machine-type: <role> (2)
  name: <infrastructure_id>-<role>-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> (3)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (2)
        machine.openshift.io/cluster-api-machine-type: <role> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-machineset: <machineset_name> (4)
          node-role.kubernetes.io/<role>: "" (2)
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (1)
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: <region> (5)
          managedIdentity: <infrastructure_id>-identity (1)
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg (1)
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructure_id>-<role>-subnet (1) (2)
          userDataSecret:
            name: worker-user-data (2)
          vmSize: Standard_D4s_v3
          vnet: <infrastructure_id>-vnet (1)
          zone: "1" (6)
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

You can obtain the subnet by running the following command:

  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1

You can obtain the vnet by running the following command:

  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
      get machineset/<infrastructure_id>-worker-centralus1
(2) Specify the node label to add.
(3) Specify the infrastructure ID, node label, and region.
(4) Optional: Specify the machine set name to enable the use of availability sets. This setting only applies to new compute machines.
(5) Specify the region to place machines on.
(6) Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify; see the query example after this list.
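If you are unsure which zones support a given VM size, you can query them with the Azure CLI; a sketch, assuming that the az CLI is installed and logged in:

  $ az vm list-skus --location <region> --size Standard_D4s_v3 --zone --output table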

Creating a machine set

In addition to the machine sets created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. Check values of a specific machine set:

      $ oc get machineset <machineset_name> -n \
        openshift-machine-api -o yaml

      Example output

      ...
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
            machine.openshift.io/cluster-api-machine-role: worker (2)
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
      (1) The cluster ID.
      (2) A default node label.
  2. Create the new MachineSet CR:

    $ oc create -f <file_name>.yaml
  3. View the list of machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.
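    You can also confirm that the machines backing the new machine set were provisioned by listing the machine resources:

    $ oc get machines -n openshift-machine-api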

Machine sets that deploy machines as Spot VMs

You can save on costs by creating a machine set running on Azure that deploys machines as non-guaranteed Spot VMs. Spot VMs utilize unused Azure capacity and are less expensive than standard VMs. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

It is strongly recommended that control plane machines are not created on Spot VMs due to the increased likelihood of the instance being terminated. Manual intervention is required to replace a terminated control plane node.

Azure can terminate a Spot VM at any time. Azure gives a 30-second warning to the user when an interruption occurs. OKD begins to remove the workloads from the affected instances when Azure issues the termination warning.

Interruptions can occur when using Spot VMs for the following reasons:

  • The instance price exceeds your maximum price

  • The supply of Spot VMs decreases

  • Azure needs capacity back

When Azure terminates an instance, a termination handler running on the Spot VM node deletes the machine resource. To maintain the replica count that the machine set specifies, the machine set creates a new machine that requests a Spot VM.

Creating Spot VMs by using machine sets

You can launch a Spot VM on Azure by adding spotVMOptions to your machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        spotVMOptions: {}

    You can optionally set the spotVMOptions.maxPrice field to limit the cost of the Spot VM. For example, you can set maxPrice: '0.98765'. If maxPrice is set, this value is used as the hourly maximum Spot price. If it is not set, the maximum price defaults to -1 and you are charged up to the standard VM price.
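    For illustration, a providerSpec stanza with a maximum price set might look like the following sketch; the price value is a placeholder:

    providerSpec:
      value:
        spotVMOptions:
          maxPrice: '0.98765'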

    Azure caps Spot VM prices at the standard price. Azure will not evict an instance due to pricing if the instance is set with the default maxPrice. However, an instance can still be evicted due to capacity restrictions.

It is strongly recommended to use the default standard VM price as the maxPrice value and to not set the maximum price for Spot VMs.

Machine sets that deploy machines on Ephemeral OS disks

You can create a machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. This configuration therefore incurs no additional cost and provides lower latency for reading, writing, and reimaging.

Creating machines on Ephemeral OS disks by using machine sets

You can launch machines on Ephemeral OS disks on Azure by editing your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster.

Procedure

  1. Edit the custom resource (CR) by running the following command:

    $ oc edit machineset <machine-set-name>

    where <machine-set-name> is the machine set that you want to provision machines on Ephemeral OS disks.

  2. Add the following to the providerSpec field:

    providerSpec:
      value:
        ...
        osDisk:
          ...
          diskSettings: (1)
            ephemeralStorageLocation: Local (1)
          cachingType: ReadOnly (1)
          managedDisk:
            storageAccountType: Standard_LRS (2)
        ...
    (1) These lines enable the use of Ephemeral OS disks.
    (2) Ephemeral OS disks are only supported for VMs or scale set instances that use the Standard LRS storage account type.

    The implementation of Ephemeral OS disk support in OKD only supports the CacheDisk placement type. Do not change the placement configuration setting.

  3. Create a machine set using the updated configuration:

    $ oc create -f <machine-set-config>.yaml

Verification

  • On the Microsoft Azure portal, review the Overview page for a machine deployed by the machine set, and verify that the Ephemeral OS disk field is set to OS cache placement.
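  • Alternatively, you can inspect the machine set from the command line; a minimal sketch, using the machine set name from the procedure:

    $ oc get machineset <machine-set-name> -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.osDisk.diskSettings}{"\n"}'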

Enabling customer-managed encryption keys for a machine set

You can supply an encryption key to Azure to encrypt data on managed disks at rest. You can enable server-side encryption with customer-managed keys by using the Machine API.

An Azure Key Vault, a disk encryption set, and an encryption key are required to use a customer-managed key. The disk encryption set must reside in a resource group where the Cloud Credential Operator (CCO) has granted permissions. If it does not, you must grant an additional reader role on the disk encryption set.
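For reference, the required Azure objects can be created with the Azure CLI. The following is a sketch, not the authoritative procedure; it assumes that the az CLI is installed and logged in, and all names are placeholders:

  # Create a key vault with purge protection, which disk encryption sets require
  $ az keyvault create -n <key_vault_name> -g <resource_group> -l <region> \
      --enable-purge-protection true

  # Create an encryption key in the vault
  $ az keyvault key create --vault-name <key_vault_name> -n <key_name> \
      --protection software

  # Create a disk encryption set that references the key
  $ az disk-encryption-set create -n <disk_encryption_set_name> -g <resource_group> \
      --source-vault <key_vault_name> \
      --key-url "$(az keyvault key show --vault-name <key_vault_name> -n <key_name> \
        --query 'key.kid' -o tsv)"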

Procedure

  • Configure the disk encryption set under the providerSpec field in your machine set YAML file. For example:

    ...
    providerSpec:
      value:
        ...
        osDisk:
          diskSizeGB: 128
          managedDisk:
            diskEncryptionSet:
              id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name>
            storageAccountType: Premium_LRS
    ...

Accelerated Networking for Microsoft Azure VMs

Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. This enhances network performance. This feature can be enabled during or after installation.

Limitations

Consider the following limitations when deciding whether to use Accelerated Networking:

  • Accelerated Networking is only supported on clusters where the Machine API is operational.

  • Although the minimum requirement for an Azure worker node is two vCPUs, Accelerated Networking requires an Azure VM size that includes at least four vCPUs. To satisfy this requirement, you can change the value of vmSize in your machine set. For information about Azure VM sizes, see Microsoft Azure documentation.

  • When this feature is enabled on an existing Azure cluster, only newly provisioned nodes are affected. Currently running nodes are not reconciled. To enable the feature on all nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.

Enabling Accelerated Networking on an existing Microsoft Azure cluster

You can enable Accelerated Networking on Azure by adding acceleratedNetworking to your machine set YAML file.

Prerequisites

  • Have an existing Microsoft Azure cluster where the Machine API is operational.

Procedure

  1. List the machine sets in your cluster by running the following command:

    $ oc get machinesets -n openshift-machine-api

    The machine sets are listed in the form of <cluster-id>-worker-<region>.

    Example output

    NAME                              DESIRED   CURRENT   READY   AVAILABLE   AGE
    jmywbfb-8zqpx-worker-centralus1   1         1         1       1           15m
    jmywbfb-8zqpx-worker-centralus2   1         1         1       1           15m
    jmywbfb-8zqpx-worker-centralus3   1         1         1       1           15m
  2. For each machine set:

    1. Edit the custom resource (CR) by running the following command:

      $ oc edit machineset <machine-set-name>
    2. Add the following to the providerSpec field:

      providerSpec:
        value:
          ...
          acceleratedNetworking: true (1)
          ...
          vmSize: <azure-vm-size> (2)
          ...
      (1) This line enables Accelerated Networking.
      (2) Specify an Azure VM size that includes at least four vCPUs. For information about VM sizes, see Microsoft Azure documentation.
  3. To enable the feature on currently running nodes, you must replace each existing machine. This can be done for each machine individually, or by scaling the replicas down to zero, and then scaling back up to your desired number of replicas.
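    A minimal sketch of the scale-down, scale-up approach; the machine set name and replica count are placeholders:

    $ oc scale --replicas=0 machineset <machine-set-name> -n openshift-machine-api
    $ oc scale --replicas=<desired_replicas> machineset <machine-set-name> -n openshift-machine-api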

Verification

  • On the Microsoft Azure portal, review the Networking settings page for a machine provisioned by the machine set, and verify that the Accelerated networking field is set to Enabled.
