Creating a compute machine set on oVirt

You can create a different compute machine set to serve a specific purpose in your OKD cluster on oVirt. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.

Machine API overview

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OKD resources.

For OKD 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OKD 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

Machines

A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.
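As an illustration only, a trimmed Machine manifest for an AWS worker might carry a providerSpec like the following sketch. The apiVersion, instance type, and AMI ID here are illustrative placeholders, not values from this document:

  apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    name: <infrastructure_id>-worker-us-east-1a
    namespace: openshift-machine-api
  spec:
    providerSpec:
      value:
        apiVersion: machine.openshift.io/v1beta1
        kind: AWSMachineProviderConfig   # AWS-specific provider configuration
        instanceType: m5.large           # the machine type for the worker node
        ami:
          id: <ami_id>                   # required metadata: the image to boot from
        placement:
          availabilityZone: us-east-1a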

Machine sets

MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute needs.
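For example, you can change the replicas field from the command line with oc scale, where <machineset_name> is the name of an existing compute machine set:

  $ oc scale --replicas=2 machineset <machineset_name> -n openshift-machine-api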

Control plane machines cannot be managed by compute machine sets.

Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.

For more information, see “Managing control plane machines”.

The following custom resources add more capabilities to your cluster:

Machine autoscaler

The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.

The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
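For example, the following is a minimal MachineAutoscaler sketch that keeps a compute machine set between 1 and 12 replicas. The name and the scaling boundaries are illustrative:

  apiVersion: autoscaling.openshift.io/v1beta1
  kind: MachineAutoscaler
  metadata:
    name: worker-us-east-1a            # by convention, matches the target machine set
    namespace: openshift-machine-api
  spec:
    minReplicas: 1                     # lower scaling boundary
    maxReplicas: 12                    # upper scaling boundary
    scaleTargetRef:                    # the compute machine set to scale
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      name: worker-us-east-1a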

Cluster autoscaler

This resource is based on the upstream cluster autoscaler project. In the OKD implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways:

  • Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU

  • Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods

  • Set the scaling policy so that you can scale up nodes but not scale them down, as shown in the sketch after this list
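For example, the following is a minimal ClusterAutoscaler sketch that sets such cluster-wide limits. The resource must be named default; the specific limit values here are illustrative, not recommendations:

  apiVersion: autoscaling.openshift.io/v1
  kind: ClusterAutoscaler
  metadata:
    name: default               # the cluster autoscaler resource must be named "default"
  spec:
    resourceLimits:
      maxNodesTotal: 24         # cluster-wide node ceiling
      cores:
        min: 8
        max: 128
      memory:                   # in GiB
        min: 4
        max: 256
    scaleDown:
      enabled: true             # allow the autoscaler to remove nodes, not only add them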

Machine health check

The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
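For example, the following is a minimal MachineHealthCheck sketch. The role label, timeouts, and maxUnhealthy threshold are illustrative:

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineHealthCheck
  metadata:
    name: example
    namespace: openshift-machine-api
  spec:
    selector:                  # which machines this check watches
      matchLabels:
        machine.openshift.io/cluster-api-machine-role: worker
    unhealthyConditions:       # node conditions that mark a machine unhealthy
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: "Unknown"
      timeout: 300s
    maxUnhealthy: "40%"        # stop remediation if too many machines are unhealthy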

In OKD version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OKD version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails, you always have another zone into which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.

Sample YAML for a compute machine set custom resource on oVirt

This sample YAML defines a compute machine set that runs on oVirt and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    labels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machine-role: <role> (2)
      machine.openshift.io/cluster-api-machine-type: <role> (2)
    name: <infrastructure_id>-<role> (3)
    namespace: openshift-machine-api
  spec:
    replicas: <number_of_replicas> (4)
    selector: (5)
      matchLabels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
    template:
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
          machine.openshift.io/cluster-api-machine-role: <role> (2)
          machine.openshift.io/cluster-api-machine-type: <role> (2)
          machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
      spec:
        metadata:
          labels:
            node-role.kubernetes.io/<role>: "" (2)
        providerSpec:
          value:
            apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
            cluster_id: <ovirt_cluster_id> (6)
            template_name: <ovirt_template_name> (7)
            sparse: <boolean_value> (8)
            format: <raw_or_cow> (9)
            cpu: (10)
              sockets: <number_of_sockets> (11)
              cores: <number_of_cores> (12)
              threads: <number_of_threads> (13)
            memory_mb: <memory_size> (14)
            guaranteed_memory_mb: <memory_size> (15)
            os_disk: (16)
              size_gb: <disk_size> (17)
            storage_domain_id: <storage_domain_UUID> (18)
            network_interfaces: (19)
            - vnic_profile_id: <vnic_profile_id> (20)
            credentialsSecret:
              name: ovirt-credentials (21)
            kind: OvirtMachineProviderSpec
            type: <workload_type> (22)
            auto_pinning_policy: <auto_pinning_policy> (23)
            hugepages: <hugepages> (24)
            affinityGroupsNames:
            - compute (25)
            userDataSecret:
              name: worker-user-data
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
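Example output (this value matches the cluster-api-cluster label shown in the example output later in this section):

  agl030519-vplxk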
(2) Specify the node label to add.
(3) Specify the infrastructure ID and node label. Together, these two strings cannot be longer than 35 characters.
(4) Specify the number of machines to create.
(5) The selector for the machines.
(6) Specify the UUID of the oVirt cluster to which this VM instance belongs.
(7) Specify the oVirt VM template to use to create the machine.
(8) Setting this option to false enables preallocation of disks. The default is true. Setting sparse to true with format set to raw is not available for block storage domains. The raw format writes the entire virtual disk to the underlying physical disk.
(9) Can be set to cow or raw. The default is cow. The cow format is optimized for virtual machines.

Preallocating disks on file storage domains writes zeroes to the file. This might not actually preallocate disks depending on the underlying storage.

(10) Optional: The cpu field contains the CPU configuration, including sockets, cores, and threads.
(11) Optional: Specify the number of sockets for a VM.
(12) Optional: Specify the number of cores per socket.
(13) Optional: Specify the number of threads per core.
(14) Optional: Specify the size of the VM's memory in MiB.
(15) Optional: Specify the size of the VM's guaranteed memory in MiB. This is the amount of memory that is guaranteed not to be drained by the ballooning mechanism. For more information, see Memory Ballooning and Optimization Settings Explained.

If you are using a version earlier than oVirt 4.4.8, see Guaranteed memory requirements for OpenShift on Red Hat Virtualization clusters.

(16) Optional: The root disk of the node.
(17) Optional: Specify the size of the bootable disk in GiB.
(18) Optional: Specify the UUID of the storage domain for the compute node's disks. If you do not specify a value, the compute node is created on the same storage domain as the control plane nodes.
(19) Optional: A list of the network interfaces of the VM. If you include this parameter, OKD discards all network interfaces from the template and creates new ones.
(20) Optional: Specify the vNIC profile ID.
(21) Specify the name of the secret object that holds the oVirt credentials.
(22) Optional: Specify the workload type for which the instance is optimized. This value affects the oVirt VM optimization parameter. Supported values: desktop, server (default), high_performance. high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide.
(23) Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none, resize_and_pin. For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide.
(24) Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576. For more information, see Configuring Huge Pages in the Virtual Machine Management Guide.
(25) Optional: A list of affinity group names to apply to the VMs. The affinity groups must exist in oVirt.

Because oVirt uses a template when creating a VM, if you do not specify a value for an optional parameter, oVirt uses the value that is specified in the template.

Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample, and name it <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. Check values of a specific compute machine set:

      $ oc get machineset <machineset_name> -n \
           openshift-machine-api -o yaml

      Example output

      ...
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
            machine.openshift.io/cluster-api-machine-role: worker (2)
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
      (1) The cluster ID.
      (2) A default node label.
  2. Create the new MachineSet CR:

    $ oc create -f <file_name>.yaml
  3. View the list of compute machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
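    Optionally, to watch the machines that the new compute machine set provisions, you can also list the Machine resources; each machine name is prefixed with the name of the compute machine set that owns it:

    $ oc get machine -n openshift-machine-api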