Creating infrastructure machine sets

This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.

You can create a machine set to host only infrastructure components. You apply specific Kubernetes labels to these machines and then update the infrastructure components to run on only those machines. These infrastructure nodes are not counted toward the total number of subscriptions that are required to run the environment.

Unlike earlier versions of OKD, you cannot move the infrastructure components to the control plane machines (also known as the master machines). To move the components, you must create a new machine set.

OKD infrastructure components

The following infrastructure workloads do not incur OKD worker subscriptions:

  • Kubernetes and OKD control plane services that run on masters

  • The default router

  • The integrated container image registry

  • The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects

  • Cluster aggregated logging

  • Service brokers

  • Red Hat Quay

  • Red Hat OpenShift Container Storage

  • Red Hat Advanced Cluster Management for Kubernetes

Any node that runs any other container, pod, or component is a worker node that your subscription must cover.

Creating infrastructure machine sets for production environments

In a production deployment, deploy at least three machine sets to hold infrastructure components. Both the logging aggregation solution and the service mesh deploy Elasticsearch, and Elasticsearch requires three instances that are installed on different nodes. For high availability, deploy these nodes to different availability zones. Since you need different machine sets for each availability zone, create at least three machine sets.

Creating machine sets for different clouds

Use the sample machine set for your cloud.

Sample YAML for a machine set custom resource on AWS

This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-infra-<zone> (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (3)
        machine.openshift.io/cluster-api-machine-type: <infra> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> (2)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: "" (3)
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          ami:
            id: ami-046fe691f52a953f9 (5)
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          blockDevices:
          - ebs:
              iops: 0
              volumeSize: 120
              volumeType: gp2
          credentialsSecret:
            name: aws-cloud-credentials
          deviceIndex: 0
          iamInstanceProfile:
            id: <infrastructure_id>-worker-profile (1)
          instanceType: m4.large
          kind: AWSMachineProviderConfig
          placement:
            availabilityZone: us-east-1a
            region: us-east-1
          securityGroups:
          - filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-worker-sg (1)
          subnet:
            filters:
            - name: tag:Name
              values:
              - <infrastructure_id>-private-us-east-1a (1)
          tags:
          - name: kubernetes.io/cluster/<infrastructure_id> (1)
            value: owned
          userDataSecret:
            name: worker-user-data
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) Specify the infrastructure ID, <infra> node label, and zone.
(3) Specify the <infra> node label.
(4) Specify a taint to prevent user workloads from being scheduled on infra nodes.
(5) Specify a valid Fedora CoreOS (FCOS) AMI for your AWS zone for your OKD nodes. To reuse the AMI of an existing worker machine set, you can obtain it by running the following command:

    $ oc -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \
        get machineset/<infrastructure_id>-worker-us-east-1a

Machine sets running on AWS support non-guaranteed Spot Instances. You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file.
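
For example, a minimal sketch of that change follows; only the relevant portion of providerSpec is shown, and the empty spotMarketOptions stanza requests Spot Instances at the default price cap:

  providerSpec:
    value:
      # Request Spot Instances for machines created by this machine set
      spotMarketOptions: {}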

Sample YAML for a machine set custom resource on Azure

This sample YAML defines a machine set that runs in zone 1 of the centralus Microsoft Azure region and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra> (2)
  name: <infrastructure_id>-infra-<region> (3)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: "" (2)
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: azureproviderconfig.openshift.io/v1beta1
          credentialsSecret:
            name: azure-cloud-credentials
            namespace: openshift-machine-api
          image:
            offer: ""
            publisher: ""
            resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> (1)
            sku: ""
            version: ""
          internalLoadBalancer: ""
          kind: AzureMachineProviderSpec
          location: centralus
          managedIdentity: <infrastructure_id>-identity (1)
          metadata:
            creationTimestamp: null
          natRule: null
          networkResourceGroup: ""
          osDisk:
            diskSizeGB: 128
            managedDisk:
              storageAccountType: Premium_LRS
            osType: Linux
          publicIP: false
          publicLoadBalancer: ""
          resourceGroup: <infrastructure_id>-rg (1)
          sshPrivateKey: ""
          sshPublicKey: ""
          subnet: <infrastructure_id>-<role>-subnet (1) (2)
          userDataSecret:
            name: worker-user-data (2)
          vmSize: Standard_DS4_v2
          vnet: <infrastructure_id>-vnet (1)
          zone: "1" (5)
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

    You can obtain the subnet by running the following command:

    $ oc -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \
        get machineset/<infrastructure_id>-worker-centralus1

    You can obtain the vnet by running the following command:

    $ oc -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \
        get machineset/<infrastructure_id>-worker-centralus1

(2) Specify the <infra> node label.
(3) Specify the infrastructure ID, <infra> node label, and region.
(4) Specify a taint to prevent user workloads from being scheduled on infra nodes.
(5) Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify.

Machine sets running on Azure support non-guaranteed Spot VMs. You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file.
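
For example, a minimal sketch of that change follows; only the relevant portion of providerSpec is shown:

  providerSpec:
    value:
      # Request Azure Spot VMs for machines created by this machine set
      spotVMOptions: {}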

Sample YAML for a machine set custom resource on GCP

This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-w-a (1)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a (1)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a (1)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: "" (2)
      taints: (3)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: gcpprovider.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image> (4)
            labels: null
            sizeGb: 128
            type: pd-ssd
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network (1)
            subnetwork: <infrastructure_id>-worker-subnet (1)
          projectID: <project_name> (5)
          region: us-central1
          serviceAccounts:
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com (1) (5)
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
          - <infrastructure_id>-worker (1)
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) Specify the <infra> node label.
(3) Specify a taint to prevent user workloads from being scheduled on infra nodes.
(4) Specify the path to the image that is used in current machine sets. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:

    $ oc -n openshift-machine-api \
        -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
        get machineset/<infrastructure_id>-worker-a

(5) Specify the name of the GCP project that you use for your cluster.

Machine sets running on GCP support non-guaranteed preemptible VM instances. You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file.
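
For example, a minimal sketch of that change follows; only the relevant portion of providerSpec is shown:

  providerSpec:
    value:
      # Request preemptible VM instances for machines created by this machine set
      preemptible: true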

Sample YAML for a machine set custom resource on RHOSP

This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <infra> (2)
    machine.openshift.io/cluster-api-machine-type: <infra> (2)
  name: <infrastructure_id>-infra (3)
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas>
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (2)
        machine.openshift.io/cluster-api-machine-type: <infra> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (3)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: ""
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
          cloudName: openstack
          cloudsSecret:
            name: openstack-cloud-credentials
            namespace: openshift-machine-api
          flavor: <nova_flavor>
          image: <glance_image_name_or_location>
          serverGroupID: <optional_UUID_of_server_group> (5)
          kind: OpenstackProviderSpec
          networks: (6)
          - filter: {}
            subnets:
            - filter:
                name: <subnet_name>
                tags: openshiftClusterID=<infrastructure_id> (1)
          primarySubnet: <rhosp_subnet_UUID> (7)
          securityGroups:
          - filter: {}
            name: <infrastructure_id>-worker (1)
          serverMetadata:
            Name: <infrastructure_id>-worker (1)
            openshiftClusterID: <infrastructure_id> (1)
          tags:
          - openshiftClusterID=<infrastructure_id> (1)
          trunk: true
          userDataSecret:
            name: worker-user-data (2)
          availabilityZone: <optional_openstack_availability_zone>
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) Specify the <infra> node label.
(3) Specify the infrastructure ID and <infra> node label.
(4) Specify a taint to prevent user workloads from being scheduled on infra nodes.
(5) To set a server group policy for the MachineSet, enter the value that is returned from creating a server group. For most deployments, anti-affinity or soft-anti-affinity policies are recommended. An example of creating a server group follows this list.
(6) Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value.
(7) Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file.
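
If you have not created a server group yet, one way to do so is with the RHOSP command-line client. The group name infra-server-group below is only an example; the UUID that the command returns is the value to use for serverGroupID:

    $ openstack server group create --policy anti-affinity infra-server-group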

Sample YAML for a machine set custom resource on oVirt

This sample YAML defines a machine set that runs on oVirt and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
    machine.openshift.io/cluster-api-machine-role: <role> (2)
    machine.openshift.io/cluster-api-machine-type: <role> (2)
  name: <infrastructure_id>-<role> (3)
  namespace: openshift-machine-api
spec:
  replicas: <number_of_replicas> (4)
  selector: (5)
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <role> (2)
        machine.openshift.io/cluster-api-machine-type: <role> (2)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> (3)
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: "" (2)
      providerSpec:
        value:
          apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1
          cluster_id: <ovirt_cluster_id> (6)
          template_name: <ovirt_template_name> (7)
          instance_type_id: <instance_type_id> (8)
          cpu: (9)
            sockets: <number_of_sockets> (10)
            cores: <number_of_cores> (11)
            threads: <number_of_threads> (12)
          memory_mb: <memory_size> (13)
          os_disk: (14)
            size_gb: <disk_size> (15)
          network_interfaces: (16)
            vnic_profile_id: <vnic_profile_id> (17)
          credentialsSecret:
            name: ovirt-credentials (18)
          kind: OvirtMachineProviderSpec
          type: <workload_type> (19)
          userDataSecret:
            name: worker-user-data
          affinityGroupsNames:
          - compute (20)
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) Specify the node label to add.
(3) Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters.
(4) Specify the number of machines to create.
(5) Selector for the machines.
(6) Specify the UUID for the oVirt cluster to which this VM instance belongs.
(7) Specify the oVirt VM template to use to create the machine.
(8) Optional: Specify the VM instance type. If you include this parameter, you do not need to specify the hardware parameters of the VM, including CPU and memory, because this parameter overrides all hardware parameters.
(9) Optional: The CPU field contains the CPU configuration, including sockets, cores, and threads.
(10) Optional: Specify the number of sockets for a VM.
(11) Optional: Specify the number of cores per socket.
(12) Optional: Specify the number of threads per core.
(13) Optional: Specify the size of a VM's memory in MiB.
(14) Optional: Root disk of the node.
(15) Optional: Specify the size of the bootable disk in GiB.
(16) Optional: List of the network interfaces of the VM. If you include this parameter, OKD discards all network interfaces from the template and creates new ones.
(17) Optional: Specify the vNIC profile ID.
(18) Specify the name of the secret that holds the oVirt credentials.
(19) Optional: Specify the workload type for which the instance is optimized. This value affects the oVirt VM parameter. Supported values: desktop, server (default), high_performance. high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide.
(20) A list of affinity group names that should be applied to the VMs. The affinity groups must exist in oVirt.

Because oVirt uses a template when creating a VM, if you do not specify a value for an optional parameter, oVirt uses the value for that parameter that is specified in the template.

Sample YAML for a machine set custom resource on vSphere

This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "".

In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add.

apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  creationTimestamp: null
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
  name: <infrastructure_id>-infra (2)
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id> (1)
        machine.openshift.io/cluster-api-machine-role: <infra> (3)
        machine.openshift.io/cluster-api-machine-type: <infra> (3)
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra (2)
    spec:
      metadata:
        creationTimestamp: null
        labels:
          node-role.kubernetes.io/infra: "" (3)
      taints: (4)
      - key: node-role.kubernetes.io/infra
        effect: NoSchedule
      providerSpec:
        value:
          apiVersion: vsphereprovider.openshift.io/v1beta1
          credentialsSecret:
            name: vsphere-cloud-credentials
          diskGiB: 120
          kind: VSphereMachineProviderSpec
          memoryMiB: 8192
          metadata:
            creationTimestamp: null
          network:
            devices:
            - networkName: "<vm_network_name>" (5)
          numCPUs: 4
          numCoresPerSocket: 1
          snapshot: ""
          template: <vm_template_name> (6)
          userDataSecret:
            name: worker-user-data
          workspace:
            datacenter: <vcenter_datacenter_name> (7)
            datastore: <vcenter_datastore_name> (8)
            folder: <vcenter_vm_folder_path> (9)
            resourcepool: <vsphere_resource_pool> (10)
            server: <vcenter_server_ip> (11)
(1) Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI (oc) installed, you can obtain the infrastructure ID by running the following command:

    $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster

(2) Specify the infrastructure ID and <infra> node label.
(3) Specify the <infra> node label.
(4) Specify a taint to prevent user workloads from being scheduled on infra nodes.
(5) Specify the vSphere VM network to deploy the machine set to.
(6) Specify the vSphere VM template to use, such as user-5ddjd-rhcos.
(7) Specify the vCenter Datacenter to deploy the machine set on.
(8) Specify the vCenter Datastore to deploy the machine set on.
(9) Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd.
(10) Specify the vSphere resource pool for your VMs.
(11) Specify the vCenter server IP or fully qualified domain name.

Creating a machine set

In addition to the ones created by the installation program, you can create your own machine sets to dynamically manage the machine compute resources for specific workloads of your choice.

Prerequisites

  • Deploy an OKD cluster.

  • Install the OpenShift CLI (oc).

  • Log in to oc as a user with cluster-admin permission.

Procedure

  1. Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml.

    Ensure that you set the <clusterID> and <role> parameter values.

    1. If you are not sure which value to set for a specific field, you can check an existing machine set from your cluster:

      $ oc get machinesets -n openshift-machine-api

      Example output

      NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
      agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
      agl030519-vplxk-worker-us-east-1d   0         0                             55m
      agl030519-vplxk-worker-us-east-1e   0         0                             55m
      agl030519-vplxk-worker-us-east-1f   0         0                             55m
    2. Check values of a specific machine set:

      $ oc get machineset <machineset_name> -n \
          openshift-machine-api -o yaml

      Example output

      ...
      template:
        metadata:
          labels:
            machine.openshift.io/cluster-api-cluster: agl030519-vplxk (1)
            machine.openshift.io/cluster-api-machine-role: worker (2)
            machine.openshift.io/cluster-api-machine-type: worker
            machine.openshift.io/cluster-api-machineset: agl030519-vplxk-worker-us-east-1a

      (1) The cluster ID.
      (2) A default node label.
  2. Create the new MachineSet CR:

    $ oc create -f <file_name>.yaml
  3. View the list of machine sets:

    $ oc get machineset -n openshift-machine-api

    Example output

    NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
    agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
    agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
    agl030519-vplxk-worker-us-east-1d   0         0                             55m
    agl030519-vplxk-worker-us-east-1e   0         0                             55m
    agl030519-vplxk-worker-us-east-1f   0         0                             55m

    When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again.
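
    Optionally, you can also list the individual machines that the new machine set creates. For example, assuming the infra machine set name shown in the example output above:

    $ oc get machines -n openshift-machine-api | grep infra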

Creating an infrastructure node

See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes (also known as the master nodes) are managed by the machine API.

Cluster requirements dictate that infrastructure nodes, also called infra nodes, be provisioned. The installation program provisions only control plane and worker nodes. You can designate worker nodes as infrastructure nodes or as application nodes, also called app nodes, through labeling.

Procedure

  1. Add a label to the worker node that you want to act as application node:

    $ oc label node <node-name> node-role.kubernetes.io/app=""
  2. Add a label to the worker nodes that you want to act as infrastructure nodes:

    $ oc label node <node-name> node-role.kubernetes.io/infra=""
  3. Check that the applicable nodes now have the infra and app roles:

    $ oc get nodes
  4. Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod’s selector.

    If the default node selector key conflicts with the key of a pod’s label, then the default node selector is not applied.

    However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.

    You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. A sketch of a project node selector follows this procedure.

    1. Edit the Scheduler object:

      $ oc edit scheduler cluster
    2. Add the defaultNodeSelector field with the appropriate node selector:

      apiVersion: config.openshift.io/v1
      kind: Scheduler
      metadata:
        name: cluster
      ...
      spec:
        defaultNodeSelector: topology.kubernetes.io/region=us-east-1 (1)
      ...

      (1) This example node selector deploys pods on nodes in the us-east-1 region by default.
    3. Save the file to apply the changes.

  5. Move infrastructure resources to the newly labeled infra nodes.
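
As an alternative to the cluster-wide default, the following is a minimal sketch of a project node selector. The namespace name my-infra-project is a hypothetical example; the openshift.io/node-selector annotation constrains pods created in that namespace to infra nodes:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: my-infra-project   # hypothetical namespace name for illustration
    annotations:
      openshift.io/node-selector: "node-role.kubernetes.io/infra="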

Creating a machine config pool for infrastructure machines

If you need infrastructure machines to have dedicated configurations, you must create an infra pool.

Procedure

  1. Add a label to the node that you want to assign as the infra node:

    $ oc label node <node_name> <label>

    $ oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=
  2. Create a machine config pool that contains both the worker role and your custom role as machine config selector:

    $ cat infra.mcp.yaml

    Example output

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: infra
    spec:
      machineConfigSelector:
        matchExpressions:
          - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} (1)
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/infra: "" (2)

    (1) Add the worker role and your custom role.
    (2) Add the label you added to the node as a nodeSelector.

    Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool.

  3. After you have the YAML file, you can create the machine config pool:

    $ oc create -f infra.mcp.yaml
  4. Check the machine configs to ensure that the infrastructure configuration rendered successfully:

    $ oc get machineconfig

    Example output

    NAME                                                         GENERATEDBYCONTROLLER                      IGNITIONVERSION   CREATED
    00-master                                                    365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    00-worker                                                    365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-master-container-runtime                                  365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-master-kubelet                                            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-worker-container-runtime                                  365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    01-worker-kubelet                                            365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries    365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-master-ssh                                                                                           3.2.0             31d
    99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries    365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             31d
    99-worker-ssh                                                                                           3.2.0             31d
    rendered-infra-4e48906dca84ee702959c71a53ee80e7              365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             23m
    rendered-master-072d4b2da7f88162636902b074e9e28e             5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-master-3e88ec72aed3886dec061df60d16d1af             02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
    rendered-master-419bee7de96134963a15fdf9dd473b25             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
    rendered-master-53f5c91c7661708adce18739cc0f40fb             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d
    rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
    rendered-master-dc7f874ec77fc4b969674204332da037             5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d             5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-2640531be11ba43c61d72e82dc634ce6             5b6fb8349a29735e48446d435962dec4547d3090   3.2.0             31d
    rendered-worker-4e48906dca84ee702959c71a53ee80e7             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             7d3h
    rendered-worker-4f110718fe88e5f349987854a1147755             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             17d
    rendered-worker-afc758e194d6188677eb837842d3b379             02c07496ba0417b3e12b78fb32baf6293d314f79   3.2.0             31d
    rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3             365c1cfd14de5b0e3b85e0fc815b0060f36ab955   3.2.0             13d

    You should see a new machine config, with the rendered-infra-* prefix.

  5. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra. Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes.

    After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration.

    1. Create a machine config:

      $ cat infra.mc.yaml

      Example output

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        name: 51-infra
        labels:
          machineconfiguration.openshift.io/role: infra (1)
      spec:
        config:
          ignition:
            version: 3.2.0
          storage:
            files:
            - path: /etc/infratest
              mode: 0644
              contents:
                source: data:,infra

      (1) Specify the custom role, in this example infra, so that the machine config is applied by the matching machine config pool.
    2. Apply the machine config to the infra-labeled nodes:

      $ oc create -f infra.mc.yaml
  6. Confirm that your new machine config pool is available:

    $ oc get mcp

    Example output

    NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
    infra    rendered-infra-60e35c2e99f42d976e084fa94da4d0fc    True      False      False      1              1                   1                     0                      4m20s
    master   rendered-master-9360fdb895d4c131c7c4bebbae099c90   True      False      False      3              3                   3                     0                      91m
    worker   rendered-worker-60e35c2e99f42d976e084fa94da4d0fc   True      False      False      2              2                   2                     0                      91m

    In this example, a worker node was changed to an infra node.


Assigning machine set resources to infrastructure nodes

After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied.

However, because an infra node also has the worker role assigned, there is a chance that user workloads could be inadvertently assigned to it. To avoid this, you can apply a taint to the infra node and tolerations to the pods that you want to control.

Binding infrastructure node workloads using taints and tolerations

If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it.

It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage the nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that the node does not count toward the total number of subscriptions.

Prerequisites

  • Configure additional MachineSet objects in your OKD cluster.

Procedure

  1. Add a taint to the infra node to prevent scheduling user workloads on it:

    1. Determine if the node has the taint:

      $ oc describe nodes <node_name>

      Sample output

      oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Name:               ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l
      Roles:              worker
      ...
      Taints:             node-role.kubernetes.io/infra:NoSchedule
      ...

      This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step.

    2. If the node does not have a taint, add one to prevent user workloads from being scheduled on it:

      $ oc adm taint nodes <node_name> <key>:<effect>

      For example:

      $ oc adm taint nodes node1 node-role.kubernetes.io/infra:NoSchedule

      This example places a taint on node1 that has key node-role.kubernetes.io/infra and taint effect NoSchedule. Nodes with the NoSchedule effect schedule only pods that tolerate the taint, but allow existing pods to remain scheduled on the node.

      If a descheduler is used, pods violating node taints could be evicted from the cluster.

  2. Add tolerations for the pod configurations you want to schedule on the infra node, like router, registry, and monitoring workloads. Add the following code to the Pod object specification:

    tolerations:
    - effect: NoSchedule (1)
      key: node-role.kubernetes.io/infra (2)
      operator: Exists (3)

    (1) Specify the effect that you added to the node.
    (2) Specify the key that you added to the node.
    (3) Specify the Exists operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node.

    This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node.

    Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator.

  3. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details.
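
    For example, a minimal sketch of a pod that tolerates the infra taint and is also steered to infra nodes with a node selector follows; the pod name and image are placeholders for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-infra-pod   # hypothetical name for illustration
    spec:
      nodeSelector:
        node-role.kubernetes.io/infra: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/infra
        operator: Exists
      containers:
      - name: example
        image: registry.example.com/example:latest   # placeholder image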


Moving resources to infrastructure machine sets

Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created.

Moving the router

You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node.

Prerequisites

  • Configure additional machine sets in your OKD cluster.

Procedure

  1. View the IngressController custom resource for the router Operator:

    $ oc get ingresscontroller default -n openshift-ingress-operator -o yaml

    The command output resembles the following text:

    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      creationTimestamp: 2019-04-18T12:35:39Z
      finalizers:
      - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller
      generation: 1
      name: default
      namespace: openshift-ingress-operator
      resourceVersion: "11341"
      selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default
      uid: 79509e05-61d6-11e9-bc55-02ce4781844a
    spec: {}
    status:
      availableReplicas: 2
      conditions:
      - lastTransitionTime: 2019-04-18T12:36:15Z
        status: "True"
        type: Available
      domain: apps.<cluster>.example.com
      endpointPublishingStrategy:
        type: LoadBalancerService
      selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
  2. Edit the ingresscontroller resource and change the nodeSelector to use the infra label:

    $ oc edit ingresscontroller default -n openshift-ingress-operator

    Add the nodeSelector stanza that references the infra label to the spec section, as shown:

    spec:
      nodePlacement:
        nodeSelector:
          matchLabels:
            node-role.kubernetes.io/infra: ""
  3. Confirm that the router pod is running on the infra node.

    1. View the list of router pods and note the node name of the running pod:

      $ oc get pod -n openshift-ingress -o wide

      Example output

      NAME                              READY   STATUS        RESTARTS   AGE   IP           NODE                           NOMINATED NODE   READINESS GATES
      router-default-86798b4b5d-bdlvd   1/1     Running       0          28s   10.130.2.4   ip-10-0-217-226.ec2.internal   <none>           <none>
      router-default-955d875f4-255g8    0/1     Terminating   0          19h   10.129.2.4   ip-10-0-148-172.ec2.internal   <none>           <none>

      In this example, the running pod is on the ip-10-0-217-226.ec2.internal node.

    2. View the node status of the running pod:

      $ oc get node <node_name> (1)

      (1) Specify the <node_name> that you obtained from the pod list.

      Example output

      NAME                           STATUS   ROLES          AGE   VERSION
      ip-10-0-217-226.ec2.internal   Ready    infra,worker   17h   v1.21.0

      Because the role list includes infra, the pod is running on the correct node.

Moving the default registry

You configure the registry Operator to deploy its pods to different nodes.

Prerequisites

  • Configure additional machine sets in your OKD cluster.

Procedure

  1. View the config/instance object:

    $ oc get configs.imageregistry.operator.openshift.io/cluster -o yaml

    Example output

    apiVersion: imageregistry.operator.openshift.io/v1
    kind: Config
    metadata:
      creationTimestamp: 2019-02-05T13:52:05Z
      finalizers:
      - imageregistry.operator.openshift.io/finalizer
      generation: 1
      name: cluster
      resourceVersion: "56174"
      selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
      uid: 36fd3724-294d-11e9-a524-12ffeee2931b
    spec:
      httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623
      logging: 2
      managementState: Managed
      proxy: {}
      replicas: 1
      requests:
        read: {}
        write: {}
      storage:
        s3:
          bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c
          region: us-east-1
    status:
    ...
  2. Edit the config/instance object:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
  3. Modify the spec section of the object to resemble the following YAML:

    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              namespaces:
              - openshift-image-registry
              topologyKey: kubernetes.io/hostname
            weight: 100
      logLevel: Normal
      managementState: Managed
      nodeSelector:
        node-role.kubernetes.io/infra: ""
  4. Verify the registry pod has been moved to the infrastructure node.

    1. Run the following command to identify the node where the registry pod is located:

      $ oc get pods -o wide -n openshift-image-registry
    2. Confirm the node has the label you specified:

      $ oc describe node <node_name>

      Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list.

Moving the monitoring solution

By default, the Prometheus Cluster Monitoring stack, which contains Prometheus, Grafana, and AlertManager, is deployed to provide cluster monitoring. It is managed by the Cluster Monitoring Operator. To move its components to different machines, you create and apply a custom config map.

Procedure

  1. Save the following ConfigMap definition as the cluster-monitoring-configmap.yaml file:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |+
        alertmanagerMain:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusK8s:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        prometheusOperator:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        grafana:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        k8sPrometheusAdapter:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        kubeStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        telemeterClient:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        openshiftStateMetrics:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
        thanosQuerier:
          nodeSelector:
            node-role.kubernetes.io/infra: ""
    Applying this config map forces the components of the monitoring stack to redeploy to infrastructure nodes.

  2. Apply the new config map:

    $ oc create -f cluster-monitoring-configmap.yaml
  3. Watch the monitoring pods move to the new machines:

    $ watch 'oc get pod -n openshift-monitoring -o wide'
  4. If a component has not moved to the infra node, delete the pod with this component:

    $ oc delete pod -n openshift-monitoring <pod>

    The component from the deleted pod is re-created on the infra node.

Moving OpenShift Logging resources

You can configure the Cluster Logging Operator to deploy the pods for OpenShift Logging components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location.

For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements.

Prerequisites

  • OpenShift Logging and Elasticsearch must be installed. These features are not installed by default.

Procedure

  1. Edit the ClusterLogging custom resource (CR) in the openshift-logging project:

    $ oc edit ClusterLogging instance
    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    ...
    spec:
      collection:
        logs:
          fluentd:
            resources: null
          type: fluentd
      logStore:
        elasticsearch:
          nodeCount: 3
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          redundancyPolicy: SingleRedundancy
          resources:
            limits:
              cpu: 500m
              memory: 16Gi
            requests:
              cpu: 500m
              memory: 16Gi
          storage: {}
        type: elasticsearch
      managementState: Managed
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana
    ...

(1) Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node.

Verification

To verify that a component has moved, you can use the oc get pod -o wide command.

For example:

  • You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node:

    $ oc get pod kibana-5b8bdf44f9-ccpq9 -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-5b8bdf44f9-ccpq9   2/2     Running   0          27s   10.129.2.18   ip-10-0-147-79.us-east-2.compute.internal   <none>           <none>
  • You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node:

    $ oc get nodes

    Example output

    NAME                                         STATUS   ROLES    AGE   VERSION
    ip-10-0-133-216.us-east-2.compute.internal   Ready    master   60m   v1.21.0
    ip-10-0-139-146.us-east-2.compute.internal   Ready    master   60m   v1.21.0
    ip-10-0-139-192.us-east-2.compute.internal   Ready    worker   51m   v1.21.0
    ip-10-0-139-241.us-east-2.compute.internal   Ready    worker   51m   v1.21.0
    ip-10-0-147-79.us-east-2.compute.internal    Ready    worker   51m   v1.21.0
    ip-10-0-152-241.us-east-2.compute.internal   Ready    master   60m   v1.21.0
    ip-10-0-139-48.us-east-2.compute.internal    Ready    infra    51m   v1.21.0

    Note that the node has a node-role.kubernetes.io/infra: '' label:

    $ oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml

    Example output

    kind: Node
    apiVersion: v1
    metadata:
      name: ip-10-0-139-48.us-east-2.compute.internal
      selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal
      uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751
      resourceVersion: '39083'
      creationTimestamp: '2020-04-13T19:07:55Z'
      labels:
        node-role.kubernetes.io/infra: ''
    ...
  • To move the Kibana pod, edit the ClusterLogging CR to add a node selector:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    ...
    spec:
    ...
      visualization:
        kibana:
          nodeSelector: (1)
            node-role.kubernetes.io/infra: ''
          proxy:
            resources: null
          replicas: 1
          resources: null
        type: kibana

    (1) Add a node selector to match the label in the node specification.
  • After you save the CR, the current Kibana pod is terminated and a new pod is deployed:

    $ oc get pods

    Example output

    NAME                                            READY   STATUS        RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running       0          29m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running       0          28m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running       0          28m
    fluentd-42dzz                                   1/1     Running       0          28m
    fluentd-d74rq                                   1/1     Running       0          28m
    fluentd-m5vr9                                   1/1     Running       0          28m
    fluentd-nkxl7                                   1/1     Running       0          28m
    fluentd-pdvqb                                   1/1     Running       0          28m
    fluentd-tflh6                                   1/1     Running       0          28m
    kibana-5b8bdf44f9-ccpq9                         2/2     Terminating   0          4m11s
    kibana-7d85dcffc8-bfpfp                         2/2     Running       0          33s
  • The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node:

    $ oc get pod kibana-7d85dcffc8-bfpfp -o wide

    Example output

    NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
    kibana-7d85dcffc8-bfpfp   2/2     Running   0          43s   10.131.0.22   ip-10-0-139-48.us-east-2.compute.internal   <none>           <none>
  • After a few moments, the original Kibana pod is removed.

    $ oc get pods

    Example output

    NAME                                            READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-84d98649c4-zb9g7       1/1     Running   0          30m
    elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj   2/2     Running   0          29m
    elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78    2/2     Running   0          29m
    fluentd-42dzz                                   1/1     Running   0          29m
    fluentd-d74rq                                   1/1     Running   0          29m
    fluentd-m5vr9                                   1/1     Running   0          29m
    fluentd-nkxl7                                   1/1     Running   0          29m
    fluentd-pdvqb                                   1/1     Running   0          29m
    fluentd-tflh6                                   1/1     Running   0          29m
    kibana-7d85dcffc8-bfpfp                         2/2     Running   0          62s
