Creating a compute machine set on AWS

You can create a different compute machine set to serve a specific purpose in your OKD cluster on Amazon Web Services (AWS). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.

Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.

To view the platform type for your cluster, run the following command:

  $ oc get infrastructure cluster -o jsonpath='{.status.platform}'
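
  For a cluster that runs on AWS, the command prints the platform type, for example:

  Example output

  AWS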

Sample YAML for a compute machine set custom resource on AWS

This sample YAML defines a compute machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-<role>-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (3)
        machine.openshift.io/cluster-api-machine-type: <role> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> (2)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" (3)
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 (4)
          apiVersion: machine.openshift.io/v1beta1
          blockDevices:
            - ebs:
                iops: 0
                volumeSize: 120
                volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile (1)
          instanceType: m6i.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: <zone> (6)
            region: <region> (7)
          securityGroups:
            - filters:
                - name: tag:Name
                  values:
                    - <infrastructure_id>-worker-sg (1)
          subnet:
            filters:
              - name: tag:Name
                values:
                  - <infrastructure_id>-private-<zone> (8)
          tags:
            - name: kubernetes.io/cluster/<infrastructure_id> (1)
              value: owned
            - name: <custom_tag_name> (5)
              value: <custom_tag_value> (5)
          userDataSecret:
            name: worker-user-data
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

$ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
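
Example output (this matches the agl030519-vplxk infrastructure ID used in the machine set names later in this section):

agl030519-vplxk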
(2) Specify the infrastructure ID, role node label, and zone.
(3) Specify the role node label to add.
(4) Specify a valid Fedora CoreOS (FCOS) Amazon Machine Image (AMI) for your AWS zone for your OKD nodes. If you want to use an AWS Marketplace image, you must complete the OKD subscription from the AWS Marketplace to obtain an AMI ID for your region. To look up the AMI ID that an existing compute machine set uses, you can run the following command:

$ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
    get machineset/<infrastructure_id>-<role>-<zone>
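
With the sample values in the YAML above, the output is the AMI ID that the machine set uses:

ami-046fe691f52a953f9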
(5) Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:admin-email@example.com.

Custom tags can also be specified during installation in the install-config.yaml file. If the install-config.yaml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yaml file.
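
For reference, a minimal sketch of what such a tag looks like in install-config.yaml, assuming the platform.aws.userTags field:

# install-config.yaml excerpt: custom tags applied at installation time
platform:
  aws:
    userTags:
      Email: admin-email@example.com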

(6) Specify the zone, for example, us-east-1a.
(7) Specify the region, for example, us-east-1.
(8) Specify the infrastructure ID and zone.

Creating a compute machine set

In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.
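
    If you need the infrastructure ID for the <clusterID> value, you can retrieve it by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster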

  2. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.

    1. To list the compute machine sets in your cluster, run the following command:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. To view values of a specific compute machine set custom resource (CR), run the following command:

      $ oc get machineset <machineset_name> \
          -n openshift-machine-api -o yaml

      Example output

      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        name: <infrastructure_id>-<role> (2)
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: <infrastructure_id>
            machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infrastructure_id>
              machine.openshift.io/cluster-api-machine-role: <role>
              machine.openshift.io/cluster-api-machine-type: <role>
              machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
          spec:
            providerSpec: (3)
              ...
      (1) The cluster infrastructure ID.
      (2) A default node label.

      For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.

      (3) The values in the providerSpec section of the compute machine set CR are platform-specific. For more information about providerSpec parameters in the CR, see the sample compute machine set CR configuration for your provider.
  3. Create a MachineSet CR by running the following command:

    $ oc create -f <file_name>.yaml
  4. If you need compute machine sets in other availability zones, repeat this process to create more compute machine sets.
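
    If a compute machine set already exists in the zone you need, you can instead change its replica count; a minimal sketch:

    $ oc scale machineset <machineset_name> -n openshift-machine-api --replicas=2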

Verification

  • View the list of compute machine sets by running the following command:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.

Assigning machines to placement groups for Elastic Fabric Adapter instances by using machine sets

You can configure a machine set to deploy machines on Elastic Fabric Adapter (EFA) instances within an existing AWS placement group.

EFA instances do not require placement groups, and you can use placement groups for purposes other than configuring an EFA. This example uses both to demonstrate a configuration that can improve network performance for machines within the specified placement group.

Prerequisites

  • You created a placement group in the AWS console.

    Ensure that the rules and limitations for the type of placement group that you create are compatible with your intended use case.
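
    If you prefer the AWS CLI to the console, a minimal sketch that creates a cluster placement group (the group name efa-pg is an example value):

    $ aws ec2 create-placement-group --group-name efa-pg --strategy cluster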

Procedure

  1. In a text editor, open the YAML file for an existing machine set or create a new one.

  2. Edit the following lines under the providerSpec field:

    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    # ...
    spec:
      template:
        spec:
          providerSpec:
            value:
              instanceType: <supported_instance_type> (1)
              networkInterfaceType: EFA (2)
              placement:
                availabilityZone: <zone> (3)
                region: <region> (4)
                placementGroupName: <placement_group> (5)
    # ...
    (1) Specify an instance type that supports EFAs.
    (2) Specify the EFA network interface type.
    (3) Specify the zone, for example, us-east-1a.
    (4) Specify the region, for example, us-east-1.
    (5) Specify the name of the existing AWS placement group to deploy machines in.

Verification

  • In the AWS console, find a machine that the machine set created and verify the following in the machine properties:

    • The placement group field has the value that you specified for the placementGroupName parameter in the machine set.

    • The interface type field indicates that it uses an EFA.
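
    You can also check from the AWS CLI instead of the console; a hedged sketch, assuming the example placement group name efa-pg:

    $ aws ec2 describe-instances \
        --filters "Name=placement-group-name,Values=efa-pg" \
        --query "Reservations[].Instances[].{ID:InstanceId,Interface:NetworkInterfaces[0].InterfaceType}"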

Machine set options for the Amazon EC2 Instance Metadata Service

You can use machine sets to create machines that use a specific version of the Amazon EC2 Instance Metadata Service (IMDS). Machine sets can create machines that allow the use of both IMDSv1 and IMDSv2 or machines that require the use of IMDSv2.

Using IMDSv2 is only supported on AWS clusters that were created with OKD version 4.7 or later.

To change the IMDS configuration for existing machines, edit the machine set YAML file that manages those machines. To deploy new compute machines with your preferred IMDS configuration, create a compute machine set YAML file with the appropriate values.

Before configuring a machine set to create machines that require IMDSv2, ensure that any workloads that interact with the AWS metadata service support IMDSv2.
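
For example, a workload that reads instance metadata with IMDSv1 performs a plain GET request, while IMDSv2 requires a session token first. A minimal sketch of the IMDSv2 flow, run from inside an instance:

# Request a session token, then present it with each metadata query (IMDSv2).
$ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
$ curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/instance-id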

Configuring IMDS by using machine sets

You can specify whether to require the use of IMDSv2 by adding or editing the value of metadataServiceOptions.authentication in the machine set YAML file for your machines.

Prerequisites

  • To use IMDSv2, your AWS cluster must have been created with OKD version 4.7 or later.

Procedure

  • Add or edit the following lines under the providerSpec field:

    providerSpec:
      value:
        metadataServiceOptions:
          authentication: Required (1)
    (1) To require IMDSv2, set the parameter value to Required. To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional. If no value is specified, both IMDSv1 and IMDSv2 are allowed.
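
    To apply this change to an existing machine set without opening an editor, a sketch using a merge patch (the machine set name is a placeholder):

    $ oc patch machineset <machineset_name> -n openshift-machine-api --type=merge \
        -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"metadataServiceOptions":{"authentication":"Required"}}}}}}}'

    Note that the new setting applies only to machines that the machine set creates after the change; existing machines keep their current IMDS configuration until they are replaced.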

Machine sets that deploy machines as Dedicated Instances

You can create a machine set running on AWS that deploys machines as Dedicated Instances. Dedicated Instances run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer. These Amazon EC2 instances are physically isolated at the host hardware level. The isolation of Dedicated Instances occurs even if the instances belong to different AWS accounts that are linked to a single payer account. However, other instances that are not dedicated can share hardware with Dedicated Instances if they belong to the same AWS account.

Instances with either public or dedicated tenancy are supported by the Machine API. Instances with public tenancy run on shared hardware. Public tenancy is the default tenancy. Instances with dedicated tenancy run on single-tenant hardware.

Creating Dedicated Instances by using machine sets

You can run a machine that is backed by a Dedicated Instance by using Machine API integration. Set the tenancy field in your machine set YAML file to launch a Dedicated Instance on AWS.

Procedure

  • Specify a dedicated tenancy under the providerSpec field:

    providerSpec:
      value:
        placement:
          tenancy: dedicated
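
    To confirm the tenancy on a machine that the machine set created, a quick check (the machine name is a placeholder):

    $ oc get machine <machine_name> -n openshift-machine-api \
        -o jsonpath='{.spec.providerSpec.value.placement.tenancy}{"\n"}'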

Machine sets that deploy machines as Spot Instances

You can save on costs by creating a compute machine set running on AWS that deploys machines as non-guaranteed Spot Instances. Spot Instances utilize unused AWS EC2 capacity and are less expensive than On-Demand Instances. You can use Spot Instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

AWS EC2 can terminate a Spot Instance at any time. AWS gives a two-minute warning to the user when an interruption occurs. OKD begins to remove the workloads from the affected instances when AWS issues the termination warning.

Interruptions can occur when using Spot Instances for the following reasons:

  • The instance price exceeds your maximum price

  • The demand for Spot Instances increases

  • The supply of Spot Instances decreases

When AWS terminates an instance, a termination handler running on the Spot Instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot Instance.
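
You can observe this replacement as it happens; when AWS reclaims a Spot Instance, the affected machine disappears from the list and the compute machine set creates a new one. A minimal sketch:

$ oc get machines -n openshift-machine-api -w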

Creating Spot Instances by using compute machine sets

You can launch a Spot Instance on AWS by adding spotMarketOptions to your compute machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        spotMarketOptions: {}

    You can optionally set the spotMarketOptions.maxPrice field to limit the cost of the Spot Instance. For example, you can set maxPrice: '2.50'.

    If maxPrice is set, this value is used as the hourly maximum Spot Instance price. If it is not set, the maximum price defaults to the On-Demand Instance price.

    It is strongly recommended to use the default On-Demand price as the maxPrice value and to not set the maximum price for Spot Instances.
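
    For example, capping the hourly Spot Instance price at $2.50 looks like the following sketch; note that maxPrice is a string value:

    providerSpec:
      value:
        spotMarketOptions:
          maxPrice: '2.50'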

Adding a GPU node to an existing OKD cluster

You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the AWS EC2 cloud provider.

For more information about the supported instance types, see the NVIDIA documentation.

Procedure

  1. View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific AWS region and OKD role.

    $ oc get nodes

    Example output

    NAME                                        STATUS   ROLES                  AGE     VERSION
    ip-10-0-52-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.28.5
    ip-10-0-58-24.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.28.5
    ip-10-0-68-148.us-east-2.compute.internal   Ready    worker                 3d17h   v1.28.5
    ip-10-0-68-68.us-east-2.compute.internal    Ready    control-plane,master   3d17h   v1.28.5
    ip-10-0-72-170.us-east-2.compute.internal   Ready    control-plane,master   3d17h   v1.28.5
    ip-10-0-74-50.us-east-2.compute.internal    Ready    worker                 3d17h   v1.28.5
  2. View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the AWS region. The installer automatically load balances compute machines across availability zones.

    $ oc get machinesets -n openshift-machine-api

    Example output

    NAME                                        DESIRED   CURRENT   READY   AVAILABLE   AGE
    preserve-dsoc12r4-ktjfc-worker-us-east-2a   1         1         1       1           3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b   2         2         2       2           3d11h
  3. View the machines that exist in the openshift-machine-api namespace by running the following command. At this time, there is only one compute machine per machine set, though a compute machine set could be scaled to add a node in a particular region and zone.

    $ oc get machines -n openshift-machine-api | grep worker

    Example output

    preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r   Running   m5.xlarge   us-east-2   us-east-2a   3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w   Running   m5.xlarge   us-east-2   us-east-2b   3d11h
    preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw   Running   m5.xlarge   us-east-2   us-east-2b   3d11h
  4. Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

    $ oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>
  5. Edit the JSON file and make the following changes to the new MachineSet definition:

    • Replace worker with gpu. This will be the name of the new machine set.

    • Change the instance type of the new MachineSet definition to g4dn, which includes an NVIDIA Tesla T4 GPU. To learn more about AWS g4dn instance types, see Accelerated Computing.

      $ jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json
      "g4dn.xlarge"

      The <output_file.json> file is saved as preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json.

  6. Update the following fields in preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json:

    • .metadata.name to a name containing gpu.

    • .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.

    • .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.

    • .spec.template.spec.providerSpec.value.instanceType to g4dn.xlarge.
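
    If you prefer to script these edits rather than use a text editor, a minimal sketch with jq, using the example names from this procedure:

    $ jq '.metadata.name = "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
        | .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
        | .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] = .metadata.name
        | .spec.template.spec.providerSpec.value.instanceType = "g4dn.xlarge"' \
        <output_file.json> > preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json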

  7. To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

    $ oc -n openshift-machine-api get machineset/preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -

    Example output

    10c10
    < "name": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a",
    ---
    > "name": "preserve-dsoc12r4-ktjfc-worker-us-east-2a",
    21c21
    < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
    31c31
    < "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "preserve-dsoc12r4-ktjfc-worker-us-east-2a"
    60c60
    < "instanceType": "g4dn.xlarge",
    ---
    > "instanceType": "m5.xlarge",
  8. Create the GPU-enabled compute machine set from the definition by running the following command:

    $ oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json

    Example output

    machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created

Verification

  1. View the machine set you created by running the following command:

    $ oc -n openshift-machine-api get machinesets | grep gpu

    The MachineSet replica count is set to 1 so a new Machine object is created automatically.

    Example output

    preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a   1   1   1   1   4m21s
  2. View the Machine object that the machine set created by running the following command:

    $ oc -n openshift-machine-api get machines | grep gpu

    Example output

    preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a   running   g4dn.xlarge   us-east-2   us-east-2a   4m36s

Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.

Deploying the Node Feature Discovery Operator

After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OKD.

Procedure

  1. Install the Node Feature Discovery Operator from OperatorHub in the OKD console.

  2. After installing the NFD Operator from OperatorHub, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.

  3. Verify that the Operator is installed and running by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS     AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d
  4. Browse to the installed Operator in the console and select Create Node Feature Discovery.

  5. Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OKD nodes for hardware resources and catalog them.

Verification

  1. After a successful build, verify that an NFD pod is running on each node by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS        AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)      12d
    nfd-master-769656c4cb-w9vrv               1/1     Running   0               12d
    nfd-worker-qjxb2                          1/1     Running   3 (3d14h ago)   12d
    nfd-worker-xtz9b                          1/1     Running   5 (3d14h ago)   12d

    The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.

  2. View the NVIDIA GPU discovered by the NFD Operator by running the following command:

    $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'

    Example output

    Roles: worker
    feature.node.kubernetes.io/pci-1013.present=true
    feature.node.kubernetes.io/pci-10de.present=true
    feature.node.kubernetes.io/pci-1d0f.present=true

    10de appears in the node feature list for the GPU-enabled node. This means that the NFD Operator correctly identified the node from the GPU-enabled MachineSet.