Creating a compute machine set on GCP

You can create a different compute machine set to serve a specific purpose in your OKD cluster on Google Cloud Platform (GCP). For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.

This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.

Machine API overview

The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OKD resources.

For OKD 4.12 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OKD 4.12 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure.

The two primary resources are:

Machines

A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a worker node on Amazon Web Services (AWS) might define a specific machine type and required metadata.

Machine sets

MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need.
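For example, assuming the compute machine set is named <machineset_name> and exists in the openshift-machine-api namespace, you can adjust the replica count either by editing the replicas field or with the oc scale command:

  $ oc scale --replicas=2 machineset <machineset_name> -n openshift-machine-api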

Control plane machines cannot be managed by compute machine sets.

Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines.

For more information, see “Managing control plane machines”.

The following custom resources add more capabilities to your cluster:

Machine autoscaler

The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes.

The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object.
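For illustration only, a minimal MachineAutoscaler sketch might look like the following; the name and replica bounds are placeholder values, and scaleTargetRef points at an existing compute machine set:

  apiVersion: autoscaling.openshift.io/v1beta1
  kind: MachineAutoscaler
  metadata:
    name: <machine_set_name>
    namespace: openshift-machine-api
  spec:
    minReplicas: 1   # lower scaling boundary for the targeted compute machine set
    maxReplicas: 6   # upper scaling boundary
    scaleTargetRef:
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      name: <machine_set_name>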

Cluster autoscaler

This resource is based on the upstream cluster autoscaler project. In the OKD implementation, it is integrated with the Machine API by extending the compute machine set API. You can use the cluster autoscaler to manage your cluster in the following ways (a minimal example follows the list):

  • Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU

  • Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods

  • Set the scaling policy so that you can scale up nodes but not scale them down
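A minimal ClusterAutoscaler sketch is shown below; the resource must be named default, and the limit values here are placeholders:

  apiVersion: autoscaling.openshift.io/v1
  kind: ClusterAutoscaler
  metadata:
    name: default
  spec:
    resourceLimits:
      maxNodesTotal: 24        # cluster-wide node limit
      cores:
        min: 8
        max: 128
      memory:                  # values in GiB
        min: 4
        max: 256
    podPriorityThreshold: -10  # pods below this priority do not trigger a scale-up
    scaleDown:
      enabled: true            # allow the autoscaler to remove underutilized nodes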

Machine health check

The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine.
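A minimal MachineHealthCheck sketch, with placeholder selector and timing values, might look like this:

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineHealthCheck
  metadata:
    name: example-health-check
    namespace: openshift-machine-api
  spec:
    selector:                    # which machines this check watches
      matchLabels:
        machine.openshift.io/cluster-api-machine-role: worker
    unhealthyConditions:         # node conditions that mark a machine unhealthy
    - type: Ready
      status: "False"
      timeout: 300s
    maxUnhealthy: "40%"          # pause remediation if too many machines are unhealthy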

In OKD version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OKD version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program distributes compute machine sets across availability zones on your behalf. Because your compute is dynamic, if a zone fails you always have another zone into which you can rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster.

Sample YAML for a compute machine set custom resource on GCP

This sample YAML defines a compute machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/<role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    labels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    name: <infrastructure_id>-w-a
    namespace: openshift-machine-api
  spec:
    replicas: 1
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    template:
      metadata:
        creationTimestamp: null
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id>
          machine.openshift.io/cluster-api-machine-role: <role> (2)
          machine.openshift.io/cluster-api-machine-type: <role>
          machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
      spec:
        metadata:
          labels:
            node-role.kubernetes.io/<role>: ""
        providerSpec:
          value:
            apiVersion: gcpprovider.openshift.io/v1beta1
            canIPForward: false
            credentialsSecret:
              name: gcp-cloud-credentials
            deletionProtection: false
            disks:
            - autoDelete: true
              boot: true
              image: <path_to_image> (3)
              labels: null
              sizeGb: 128
              type: pd-ssd
            gcpMetadata: (4)
            - key: <custom_metadata_key>
              value: <custom_metadata_value>
            kind: GCPMachineProviderSpec
            machineType: n1-standard-4
            metadata:
              creationTimestamp: null
            networkInterfaces:
            - network: <infrastructure_id>-network
              subnetwork: <infrastructure_id>-worker-subnet
            projectID: <project_name> (5)
            region: us-central1
            serviceAccounts:
            - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
              scopes:
              - https://www.googleapis.com/auth/cloud-platform
            tags:
            - <infrastructure_id>-worker
            userDataSecret:
              name: worker-user-data
            zone: us-central1-a
(1) For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) For <role>, specify the node label to add.
(3) Specify the path to the image that is used in current compute machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

  $ oc -n openshift-machine-api \
    -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
    get machineset/<infrastructure_id>-worker-a

To use a GCP Marketplace image, specify the offer to use.

(4) Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata.
(5) For <project_name>, specify the name of the GCP project that you use for your cluster.

Creating a compute machine set

In addition to the ones created by the installation program, you can create your own compute machine sets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. Check values of a specific compute machine set:

      $ oc get machineset <machineset_name> \
        -n openshift-machine-api -o yaml

      Example output

      ...
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
            machine.openshift.io/cluster-api-machine-role: worker (2)
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a
      (1) The cluster ID.
      (2) A default node label.
  2. Create the new MachineSet CR:

    $ oc create -f <file_name>.yaml
  3. View the list of compute machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
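    You can also list the individual machines that the new compute machine set provisions, for example by filtering on the machine set label:

      $ oc get machines -n openshift-machine-api \
        -l machine.openshift.io/cluster-api-machineset=<machineset_name>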

Configuring persistent disk types by using compute machine sets

You can configure the type of persistent disk that a compute machine set deploys machines on by editing the compute machine set YAML file.

For more information about persistent disk types, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about persistent disks.

Procedure

  1. In a text editor, open the YAML file for an existing compute machine set or create a new one.

  2. Edit the following line under the providerSpec field:

    providerSpec:
      value:
        disks:
        - type: <pd-disk-type> (1)
    (1) Specify the persistent disk type. Valid values are pd-ssd, pd-standard, and pd-balanced. The default value is pd-standard.

Verification

  • On the Google Cloud console, review the details for a machine deployed by the compute machine set and verify that the Type field matches the configured disk type.
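    Alternatively, you can read the configured value back from the compute machine set itself, for example:

      $ oc get machineset <machineset_name> -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].type}{"\n"}'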

Machine sets that deploy machines as preemptible VM instances

You can save on costs by creating a compute machine set running on GCP that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances utilize excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.

GCP Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur in 30 seconds. OKD begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine.

Interruptions can occur when using preemptible VM instances for the following reasons:

  • There is a system or maintenance event

  • The supply of preemptible VM instances decreases

  • The instance reaches the end of the allotted 24-hour period for preemptible VM instances

When GCP terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance.

Creating preemptible VM instances by using compute machine sets

You can launch a preemptible VM instance on GCP by adding preemptible to your compute machine set YAML file.

Procedure

  • Add the following line under the providerSpec field:

    providerSpec:
      value:
        preemptible: true

    If preemptible is set to true, the machine is labeled as an interruptable-instance after the instance is launched.
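    To confirm which machines were provisioned as preemptible instances, you can inspect the labels on the Machine resources; the exact label key is not shown here, so a broad filter such as the following is a reasonable sketch:

      $ oc get machines -n openshift-machine-api --show-labels | grep interrupt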

Enabling customer-managed encryption keys for a compute machine set

Google Cloud Platform (GCP) Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.

You can enable encryption with a customer-managed key by using the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.

If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern.
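If you do not already have a KMS key, you can typically create the key ring and key with the gcloud CLI before granting access; the names and location in this sketch are placeholders:

  $ gcloud kms keyrings create <key_ring_name> --location <key_ring_location>

  $ gcloud kms keys create <key_name> \
    --keyring <key_ring_name> \
    --location <key_ring_location> \
    --purpose encryption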

Procedure

  1. Run the following command with your KMS key name, key ring name, and location to allow a specific service account to use your KMS key and to grant the service account the correct IAM role:

    $ gcloud kms keys add-iam-policy-binding <key_name> \
      --keyring <key_ring_name> \
      --location <key_ring_location> \
      --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \
      --role roles/cloudkms.cryptoKeyEncrypterDecrypter
  2. Configure the encryption key under the providerSpec field in your compute machine set YAML file. For example:

    providerSpec:
      value:
        # ...
        disks:
        - type:
          # ...
          encryptionKey:
            kmsKey:
              name: machine-encryption-key (1)
              keyRing: openshift-encryption-ring (2)
              location: global (3)
              projectID: openshift-gcp-project (4)
            kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com (5)
    (1) The name of the customer-managed encryption key that is used for the disk encryption.
    (2) The name of the KMS key ring that the KMS key belongs to.
    (3) The GCP location in which the KMS key ring exists.
    (4) Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the projectID value from the compute machine set is used.
    (5) Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.

    After a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key.

Enabling GPU support for a compute machine set

Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OKD on GCP supports NVIDIA GPU models in the A2 and N1 machine series.

Table 1. Supported GPU configurations

Model name    | GPU type           | Machine types [1]
NVIDIA A100   | nvidia-tesla-a100  | a2-highgpu-1g, a2-highgpu-2g, a2-highgpu-4g, a2-highgpu-8g, a2-megagpu-16g
NVIDIA K80    | nvidia-tesla-k80   | All N1 machine types listed below
NVIDIA P100   | nvidia-tesla-p100  | All N1 machine types listed below
NVIDIA P4     | nvidia-tesla-p4    | All N1 machine types listed below
NVIDIA T4     | nvidia-tesla-t4    | All N1 machine types listed below
NVIDIA V100   | nvidia-tesla-v100  | All N1 machine types listed below

N1 machine types: n1-standard-1, n1-standard-2, n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-2, n1-highcpu-4, n1-highcpu-8, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96

[1] For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the GCP Compute Engine documentation about N1 machine series, A2 machine series, and GPU regions and zones availability.

You can define which supported GPU to use for an instance by using the Machine API.

You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators.

GPUs for graphics workloads are not supported.

Procedure

  1. In a text editor, open the YAML file for an existing compute machine set or create a new one.

  2. Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. See the following examples of valid configurations:

    Example configuration for the A2 machine series:

    providerSpec:
      value:
        machineType: a2-highgpu-1g (1)
        onHostMaintenance: Terminate (2)
        restartPolicy: Always (3)
    (1) Specify the machine type. Ensure that the machine type is included in the A2 machine series.
    (2) When using GPU support, you must set onHostMaintenance to Terminate.
    (3) Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.

    Example configuration for the N1 machine series:

    providerSpec:
      value:
        gpus:
        - count: 1 (1)
          type: nvidia-tesla-p100 (2)
        machineType: n1-standard-1 (3)
        onHostMaintenance: Terminate (4)
        restartPolicy: Always (5)
    (1) Specify the number of GPUs to attach to the machine.
    (2) Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible.
    (3) Specify the machine type. Ensure that the machine type and GPU type are compatible.
    (4) When using GPU support, you must set onHostMaintenance to Terminate.
    (5) Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.

Adding a GPU node to an existing OKD cluster

You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the GCP cloud provider.

The following table lists the validated instance types:

Instance type   | NVIDIA GPU accelerator   | Maximum number of GPUs   | Architecture
a2-highgpu-1g   | A100                     | 1                        | x86
n1-standard-4   | T4                       | 1                        | x86

Procedure

  1. Make a copy of an existing MachineSet.

  2. In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.

  3. Change the instance type by adding the following two lines to the newly copied MachineSet:

    machineType: a2-highgpu-1g
    onHostMaintenance: Terminate

    Example a2-highgpu-1g.json file

    {
      "apiVersion": "machine.openshift.io/v1beta1",
      "kind": "MachineSet",
      "metadata": {
        "annotations": {
          "machine.openshift.io/GPU": "0",
          "machine.openshift.io/memoryMb": "16384",
          "machine.openshift.io/vCPU": "4"
        },
        "creationTimestamp": "2023-01-13T17:11:02Z",
        "generation": 1,
        "labels": {
          "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p"
        },
        "name": "myclustername-2pt9p-worker-gpu-a",
        "namespace": "openshift-machine-api",
        "resourceVersion": "20185",
        "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd"
      },
      "spec": {
        "replicas": 1,
        "selector": {
          "matchLabels": {
            "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
            "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
          }
        },
        "template": {
          "metadata": {
            "labels": {
              "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
              "machine.openshift.io/cluster-api-machine-role": "worker",
              "machine.openshift.io/cluster-api-machine-type": "worker",
              "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
            }
          },
          "spec": {
            "lifecycleHooks": {},
            "metadata": {},
            "providerSpec": {
              "value": {
                "apiVersion": "machine.openshift.io/v1beta1",
                "canIPForward": false,
                "credentialsSecret": {
                  "name": "gcp-cloud-credentials"
                },
                "deletionProtection": false,
                "disks": [
                  {
                    "autoDelete": true,
                    "boot": true,
                    "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64",
                    "labels": null,
                    "sizeGb": 128,
                    "type": "pd-ssd"
                  }
                ],
                "kind": "GCPMachineProviderSpec",
                "machineType": "a2-highgpu-1g",
                "onHostMaintenance": "Terminate",
                "metadata": {
                  "creationTimestamp": null
                },
                "networkInterfaces": [
                  {
                    "network": "myclustername-2pt9p-network",
                    "subnetwork": "myclustername-2pt9p-worker-subnet"
                  }
                ],
                "preemptible": true,
                "projectID": "myteam",
                "region": "us-central1",
                "serviceAccounts": [
                  {
                    "email": "myclustername-2pt9p-w@myteam.iam.gserviceaccount.com",
                    "scopes": [
                      "https://www.googleapis.com/auth/cloud-platform"
                    ]
                  }
                ],
                "tags": [
                  "myclustername-2pt9p-worker"
                ],
                "userDataSecret": {
                  "name": "worker-user-data"
                },
                "zone": "us-central1-a"
              }
            }
          }
        }
      },
      "status": {
        "availableReplicas": 1,
        "fullyLabeledReplicas": 1,
        "observedGeneration": 1,
        "readyReplicas": 1,
        "replicas": 1
      }
    }
  4. View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific GCP region and OKD role.

    $ oc get nodes

    Example output

    NAME STATUS ROLES AGE VERSION
    myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.25.4+77bec7a
    myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.25.4+77bec7a
    myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.25.4+77bec7a
    myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.25.4+77bec7a
    myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.25.4+77bec7a
    myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.25.4+77bec7a
    myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.25.4+77bec7a
  5. View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the GCP region. The installer automatically load balances compute machines across availability zones.

    $ oc get machinesets -n openshift-machine-api

    Example output

    NAME                           DESIRED   CURRENT   READY   AVAILABLE   AGE
    myclustername-2pt9p-worker-a   1         1         1       1           8h
    myclustername-2pt9p-worker-b   1         1         1       1           8h
    myclustername-2pt9p-worker-c   1         1                             8h
    myclustername-2pt9p-worker-f   0         0                             8h
  6. View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.

    $ oc get machines -n openshift-machine-api | grep worker

    Example output

    myclustername-2pt9p-worker-a-mxtnz   Running   n2-standard-4   us-central1   us-central1-a   8h
    myclustername-2pt9p-worker-b-9pzzn   Running   n2-standard-4   us-central1   us-central1-b   8h
    myclustername-2pt9p-worker-c-6pbg6   Running   n2-standard-4   us-central1   us-central1-c   8h
  7. Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.

    $ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>
  8. Edit the JSON file to make the following changes to the new MachineSet definition:

    • Rename the machine set by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.

    • Change the machineType of the new MachineSet definition to a2-highgpu-1g, which includes an NVIDIA A100 GPU.

      $ jq .spec.template.spec.providerSpec.value.machineType ocp_4.12_machineset-a2-highgpu-1g.json

      "a2-highgpu-1g"

      The <output_file.json> file is saved as ocp_4.12_machineset-a2-highgpu-1g.json.

  9. Update the following fields in ocp_4.12_machineset-a2-highgpu-1g.json (a scripted sketch of these edits follows the list):

    • Change .metadata.name to a name containing gpu.

    • Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.

    • Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.

    • Change .spec.template.spec.providerSpec.value.machineType to a2-highgpu-1g.

    • Add the following line under machineType: "onHostMaintenance": "Terminate". For example:

      "machineType": "a2-highgpu-1g",
      "onHostMaintenance": "Terminate",
  10. To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:

    $ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.12_machineset-a2-highgpu-1g.json -

    Example output

    15c15
    < "name": "myclustername-2pt9p-worker-gpu-a",
    ---
    > "name": "myclustername-2pt9p-worker-a",
    25c25
    < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
    34c34
    < "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
    ---
    > "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
    59,60c59
    < "machineType": "a2-highgpu-1g",
    < "onHostMaintenance": "Terminate",
    ---
    > "machineType": "n2-standard-4",
  11. Create the GPU-enabled compute machine set from the definition file by running the following command:

    $ oc create -f ocp_4.12_machineset-a2-highgpu-1g.json

    Example output

    machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created

Verification

  1. View the machine set you created by running the following command:

    $ oc -n openshift-machine-api get machinesets | grep gpu

    The MachineSet replica count is set to 1 so a new Machine object is created automatically.

    Example output

    myclustername-2pt9p-worker-gpu-a   1   1   1   1   5h24m
  2. View the Machine object that the machine set created by running the following command:

    $ oc -n openshift-machine-api get machines | grep gpu

    Example output

    myclustername-2pt9p-worker-gpu-a-wxcr6   Running   a2-highgpu-1g   us-central1   us-central1-a   5h25m

Note that there is no need to specify a namespace for the node. The node definition is cluster scoped.

Deploying the Node Feature Discovery Operator

After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OKD.

Procedure

  1. Install the Node Feature Discovery Operator from OperatorHub in the OKD console.

  2. After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.

  3. Verify that the Operator is installed and running by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS     AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d
  4. Browse to the installed Operator in the console and select Create Node Feature Discovery.

  5. Select Create to build an NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OKD nodes for hardware resources and catalog them.

Verification

  1. After a successful build, verify that an NFD pod is running on each node by running the following command:

    $ oc get pods -n openshift-nfd

    Example output

    NAME                                      READY   STATUS    RESTARTS        AGE
    nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)      12d
    nfd-master-769656c4cb-w9vrv               1/1     Running   0               12d
    nfd-worker-qjxb2                          1/1     Running   3 (3d14h ago)   12d
    nfd-worker-xtz9b                          1/1     Running   5 (3d14h ago)   12d

    The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.

  2. View the NVIDIA GPU discovered by the NFD Operator by running the following command:

    $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'

    Example output

    Roles: worker
    feature.node.kubernetes.io/pci-1013.present=true
    feature.node.kubernetes.io/pci-10de.present=true
    feature.node.kubernetes.io/pci-1d0f.present=true

    10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.