Adding Operators to a cluster

Using Operator Lifecycle Manager (OLM), cluster administrators can install OLM-based Operators to an OKD cluster.

For information on how OLM handles updates for installed Operators colocated in the same namespace, as well as an alternative method for installing Operators with custom global Operator groups, see Multitenancy and Operator colocation.

Prerequisites

  • Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager as shown in Obtaining the installation program in the installation documentation for your platform.

    If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.

About Operator installation with OperatorHub

OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster.

As a cluster administrator, you can install an Operator from OperatorHub by using the OKD web console or CLI. Subscribing an Operator to one or more namespaces makes the Operator available to developers on your cluster.

During installation, you must determine the following initial settings for the Operator:

Installation Mode

Choose All namespaces on the cluster (default) to have the Operator installed in all namespaces, or choose individual namespaces, if available, to install the Operator only in selected namespaces. This example chooses All namespaces on the cluster to make the Operator available to all users and projects.

Update Channel

If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list.

Approval Strategy

You can choose automatic or manual updates.

If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention.

If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version.

Additional resources

Installing from OperatorHub using the web console

You can install and subscribe to an Operator from OperatorHub by using the OKD web console.

Prerequisites

  • Access to an OKD cluster using an account with cluster-admin permissions.

Procedure

  1. Navigate in the web console to the Operators → OperatorHub page.

  2. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator.

    You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.

  3. Select the Operator to display additional information.

    Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing.

  4. Read the information about the Operator and click Install.

  5. On the Install Operator page:

    1. Select one of the following:

      • All namespaces on the cluster (default) installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available.

      • A specific namespace on the cluster allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace.

    2. If the cluster is in AWS STS mode, enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field.

      Entering the ARN

      To create the role’s ARN, follow the procedure described in Preparing AWS account.

    3. If more than one update channel is available, select an Update channel.

    4. Select Automatic or Manual approval strategy, as described earlier.

      If the web console shows that the cluster is in “STS mode”, you must set Update approval to Manual.

      Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.

  6. Click Install to make the Operator available to the selected namespaces on this OKD cluster.

    1. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan.

      After approving on the Install Plan page, the subscription upgrade status moves to Up to date.

    2. If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention.

  7. After the upgrade status of the subscription is Up to date, select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace.

    For the All namespaces on the cluster installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces.

    If it does not:

    1. Check the logs in any pods in the openshift-operators project (or other relevant namespace if the A specific namespace on the cluster installation mode was selected) on the Workloads → Pods page that are reporting issues to troubleshoot further.

Installing from OperatorHub using the CLI

Instead of using the OKD web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object.

Prerequisites

  • Access to an OKD cluster using an account with cluster-admin permissions.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. View the list of Operators available to the cluster from OperatorHub:

    $ oc get packagemanifests -n openshift-marketplace

    Example output

    NAME                             CATALOG               AGE
    3scale-operator                  Red Hat Operators     91m
    advanced-cluster-management      Red Hat Operators     91m
    amq7-cert-manager                Red Hat Operators     91m
    ...
    couchbase-enterprise-certified   Certified Operators   91m
    crunchy-postgres-operator        Certified Operators   91m
    mongodb-enterprise               Certified Operators   91m
    ...
    etcd                             Community Operators   91m
    jaeger                           Community Operators   91m
    kubefed                          Community Operators   91m
    ...

    Note the catalog for your desired Operator.

  2. Inspect your desired Operator to verify its supported install modes and available channels:

    $ oc describe packagemanifests <operator_name> -n openshift-marketplace
  3. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one.

    The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>
    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  4. Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml:

    Example Subscription object

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: <subscription_name>
      namespace: openshift-operators (1)
    spec:
      channel: <channel_name> (2)
      name: <operator_name> (3)
      source: redhat-operators (4)
      sourceNamespace: openshift-marketplace (5)
      config:
        env: (6)
        - name: ARGS
          value: "-v=10"
        envFrom: (7)
        - secretRef:
            name: license-secret
        volumes: (8)
        - name: <volume_name>
          configMap:
            name: <configmap_name>
        volumeMounts: (9)
        - mountPath: <directory_name>
          name: <volume_name>
        tolerations: (10)
        - operator: "Exists"
        resources: (11)
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        nodeSelector: (12)
          foo: bar

    (1) For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
    (2) Name of the channel to subscribe to.
    (3) Name of the Operator to subscribe to.
    (4) Name of the catalog source that provides the Operator.
    (5) Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
    (6) The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM.
    (7) The envFrom parameter defines a list of sources to populate environment variables in the container.
    (8) The volumes parameter defines a list of volumes that must exist on the pod created by OLM.
    (9) The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
    (10) The tolerations parameter defines a list of tolerations for the pod created by OLM.
    (11) The resources parameter defines resource constraints for all the containers in the pod created by OLM.
    (12) The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
  5. If the cluster is in STS mode, include the following fields in the Subscription object:

    kind: Subscription
    # ...
    spec:
      installPlanApproval: Manual (1)
      config:
        env:
        - name: ROLEARN
          value: "<role_arn>" (2)

    (1) Subscriptions with automatic update approvals are not recommended because there might be permission changes to make prior to updating. Subscriptions with manual update approvals ensure that administrators have the opportunity to verify the permissions of the later version and take any necessary steps prior to update.
    (2) Include the role ARN details.
  6. Create the Subscription object:

    $ oc apply -f sub.yaml

    At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation.
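    As a sketch, you can check both conditions from the CLI; the namespace here assumes the default AllNamespaces install mode, and <operator_api_group> is a placeholder for your Operator's API group:

    ```shell
    # Watch for the Operator's CSV to appear and reach the Succeeded phase
    oc get csv -n openshift-operators

    # Confirm that the APIs provided by the Operator are now served
    # (<operator_api_group> is a placeholder)
    oc api-resources --api-group=<operator_api_group>
    ```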

Additional resources

Installing a specific version of an Operator

You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object.

Prerequisites

  • Access to an OKD cluster using an account with cluster-admin permissions

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Look up the available versions and channels of the Operator you want to install by running the following command:

    Command syntax

    $ oc describe packagemanifests <operator_name> -n <catalog_namespace>

    For example, the following command prints the available channels and versions of the Red Hat Quay Operator from OperatorHub:

    Example command

    $ oc describe packagemanifests quay-operator -n openshift-marketplace

    Example output

    Name:         quay-operator
    Namespace:    operator-marketplace
    Labels:       catalog=redhat-operators
                  catalog-namespace=openshift-marketplace
                  hypershift.openshift.io/managed=true
                  operatorframework.io/arch.amd64=supported
                  operatorframework.io/os.linux=supported
                  provider=Red Hat
                  provider-url=
    Annotations:  <none>
    API Version:  packages.operators.coreos.com/v1
    Kind:         PackageManifest
    ...
    Current CSV:  quay-operator.v3.7.11
    ...
    Entries:
      Name:     quay-operator.v3.7.11
      Version:  3.7.11
      Name:     quay-operator.v3.7.10
      Version:  3.7.10
      Name:     quay-operator.v3.7.9
      Version:  3.7.9
      Name:     quay-operator.v3.7.8
      Version:  3.7.8
      Name:     quay-operator.v3.7.7
      Version:  3.7.7
      Name:     quay-operator.v3.7.6
      Version:  3.7.6
      Name:     quay-operator.v3.7.5
      Version:  3.7.5
      Name:     quay-operator.v3.7.4
      Version:  3.7.4
      Name:     quay-operator.v3.7.3
      Version:  3.7.3
      Name:     quay-operator.v3.7.2
      Version:  3.7.2
      Name:     quay-operator.v3.7.1
      Version:  3.7.1
      Name:     quay-operator.v3.7.0
      Version:  3.7.0
    Name:       stable-3.7
    ...
    Current CSV:  quay-operator.v3.8.5
    ...
    Entries:
      Name:     quay-operator.v3.8.5
      Version:  3.8.5
      Name:     quay-operator.v3.8.4
      Version:  3.8.4
      Name:     quay-operator.v3.8.3
      Version:  3.8.3
      Name:     quay-operator.v3.8.2
      Version:  3.8.2
      Name:     quay-operator.v3.8.1
      Version:  3.8.1
      Name:     quay-operator.v3.8.0
      Version:  3.8.0
    Name:       stable-3.8
    Default Channel:  stable-3.8
    Package Name:     quay-operator

    You can print an Operator’s version and channel information in the YAML format by running the following command:

    $ oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml
    • If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog:

      $ oc get packagemanifest \
          --selector=catalog=<catalogsource_name> \
          --field-selector metadata.name=<operator_name> \
          -n <catalog_namespace> -o yaml

      If you do not specify the Operator’s catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met:

      • Multiple catalogs are installed in the same namespace.

      • The catalogs contain the same Operators or Operators with the same name.

  2. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required role-based access control (RBAC) access for all Operators in the same namespace as the Operator group.

    The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place.

    However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one:

    1. Create an OperatorGroup object YAML file, for example operatorgroup.yaml:

      Example OperatorGroup object

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: <operatorgroup_name>
        namespace: <namespace>
      spec:
        targetNamespaces:
        - <namespace>
    2. Create the OperatorGroup object:

      $ oc apply -f operatorgroup.yaml
  3. Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog.

    For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.7.10:

    Subscription with a specific starting Operator version

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: quay-operator
      namespace: quay
    spec:
      channel: stable-3.7
      installPlanApproval: Manual (1)
      name: quay-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
      startingCSV: quay-operator.v3.7.10 (2)

    (1) Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This strategy prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation.
    (2) Set a specific version of an Operator CSV.
  4. Create the Subscription object:

    $ oc apply -f sub.yaml
  5. Manually approve the pending install plan to complete the Operator installation.
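    The approval can be sketched with oc, assuming the quay namespace from the example Subscription; the install plan name install-xxxxx is a placeholder for the name reported by the first command:

    ```shell
    # List install plans in the namespace; the pending plan shows APPROVED false
    oc get installplan -n quay

    # Approve the pending install plan by name (install-xxxxx is a placeholder)
    oc patch installplan install-xxxxx -n quay \
        --type merge --patch '{"spec":{"approved":true}}'
    ```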

Additional resources

Installing a specific version of an Operator in the web console

You can install a specific version of an Operator by using OperatorHub in the web console. You can browse the versions of an Operator across its available channels, view the metadata for that channel and version, and select the exact version you want to install.

Prerequisites

  • You must have administrator privileges.

Procedure

  1. From the web console, click Operators → OperatorHub.

  2. Select an Operator you want to install.

  3. From the selected Operator, you can select a Channel and Version from the lists.

    The version selection defaults to the latest version for the selected channel. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, the Manual approval strategy is required, because you are not installing the latest version for the selected channel.

    Manual approval applies to all Operators installed in a namespace.

    Installing an Operator with manual approval causes all Operators installed within the namespace to function with the Manual approval strategy, and all Operators are updated together. To update Operators independently, install them into separate namespaces.

  4. Click Install.

Verification

  • When the Operator is installed, the metadata indicates which channel and version are installed.

    The channel and version dropdown menus are still available for viewing other version metadata in this catalog context.

Preparing for multiple instances of an Operator for multitenant clusters

As a cluster administrator, you can add multiple instances of an Operator for use in multitenant clusters. This is an alternative solution to either using the standard All namespaces install mode, which can be considered to violate the principle of least privilege, or the Multinamespace mode, which is not widely adopted. For more information, see “Operators in multitenant clusters”.

In the following procedure, the tenant is a user or group of users that share common access and privileges for a set of deployed workloads. The tenant Operator is the instance of an Operator that is intended for use by only that tenant.

Prerequisites

  • All instances of the Operator you want to install must be the same version across a given cluster.

    For more information on this and other limitations, see “Operators in multitenant clusters”.

Procedure

  1. Before installing the Operator, create a namespace for the tenant Operator that is separate from the tenant’s namespace. For example, if the tenant’s namespace is team1, you might create a team1-operator namespace:

    1. Define a Namespace resource and save the YAML file, for example, team1-operator.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: team1-operator
    2. Create the namespace by running the following command:

      $ oc create -f team1-operator.yaml
  2. Create an Operator group for the tenant Operator scoped to the tenant’s namespace, with only that one namespace entry in the spec.targetNamespaces list:

    1. Define an OperatorGroup resource and save the YAML file, for example, team1-operatorgroup.yaml:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: team1-operatorgroup
        namespace: team1-operator
      spec:
        targetNamespaces:
        - team1 (1)

      (1) Define only the tenant’s namespace in the spec.targetNamespaces list.
    2. Create the Operator group by running the following command:

      $ oc create -f team1-operatorgroup.yaml

Next steps

  • Install the Operator in the tenant Operator namespace. This task is more easily performed by using the OperatorHub in the web console instead of the CLI; for a detailed procedure, see Installing from OperatorHub using the web console.

    After completing the Operator installation, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator’s pod nor its service account are visible or usable by the tenant.
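    One way to spot-check this scoping, sketched with oc against the team1 example:

    ```shell
    # The Operator group should resolve to only the tenant namespace
    oc get operatorgroup team1-operatorgroup -n team1-operator \
        -o jsonpath='{.status.namespaces}'

    # The Operator's pod and service account live in the tenant Operator
    # namespace, outside the tenant's own namespace
    oc get pods,serviceaccounts -n team1-operator
    ```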

Additional resources

Installing global Operators in custom namespaces

When installing Operators with the OKD web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. This can cause issues related to shared install plans and update policies between all Operators in the namespace. For more details on these limitations, see “Multitenancy and Operator colocation”.

As a cluster administrator, you can bypass this default behavior manually by creating a custom global namespace and using that namespace to install your individual or scoped set of Operators and their dependencies.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

Procedure

  1. Before installing the Operator, create a namespace for the installation of your desired Operator. This installation namespace will become the custom global namespace:

    1. Define a Namespace resource and save the YAML file, for example, global-operators.yaml:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: global-operators
    2. Create the namespace by running the following command:

      $ oc create -f global-operators.yaml
  2. Create a custom global Operator group, which is an Operator group that watches all namespaces:

    1. Define an OperatorGroup resource and save the YAML file, for example, global-operatorgroup.yaml. Omit both the spec.selector and spec.targetNamespaces fields to make it a global Operator group, which selects all namespaces:

      apiVersion: operators.coreos.com/v1
      kind: OperatorGroup
      metadata:
        name: global-operatorgroup
        namespace: global-operators

      The status.namespaces of a created global Operator group contains the empty string (“”), which signals to a consuming Operator that it should watch all namespaces.

    2. Create the Operator group by running the following command:

      $ oc create -f global-operatorgroup.yaml
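
    3. To confirm that the Operator group resolved globally, you can inspect its status; a sketch:

      ```shell
      # A global Operator group reports the empty string ("") as its only
      # target namespace, signaling that member Operators watch all namespaces
      oc get operatorgroup global-operatorgroup -n global-operators \
          -o jsonpath='{.status.namespaces}'
      ```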

Next steps

  • Install the desired Operator in your custom global namespace. Because the web console does not populate the Installed Namespace menu during Operator installation with custom global namespaces, this task can only be performed with the OpenShift CLI (oc). For a detailed procedure, see Installing from OperatorHub using the CLI.

    When you initiate the Operator installation, if the Operator has dependencies, the dependencies are also automatically installed in the custom global namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans.

Additional resources

Pod placement of Operator workloads

By default, Operator Lifecycle Manager (OLM) places pods on arbitrary worker nodes when installing an Operator or deploying Operand workloads. As an administrator, you can use projects with a combination of node selectors, taints, and tolerations to control the placement of Operators and Operands to specific nodes.

Controlling pod placement of Operator and Operand workloads has the following prerequisites:

  1. Determine a node or set of nodes to target for the pods per your requirements. If available, note an existing label, such as node-role.kubernetes.io/app, that identifies the node or nodes. Otherwise, add a label, such as myoperator, by using a compute machine set or editing the node directly. You will use this label in a later step as the node selector on your project.

  2. If you want to ensure that only pods with a certain label are allowed to run on the nodes, while steering unrelated workloads to other nodes, add a taint to the node or nodes by using a compute machine set or editing the node directly. Use an effect that ensures that new pods that do not match the taint cannot be scheduled on the nodes. For example, a myoperator:NoSchedule taint ensures that new pods that do not match the taint are not scheduled onto that node, but existing pods on the node are allowed to remain.

  3. Create a project that is configured with a default node selector and, if you added a taint, a matching toleration.
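
The three prerequisites above can be sketched with oc; the node name, label, and project name are hypothetical examples:

```shell
# 1. Label the node or nodes to target (<node_name> is a placeholder)
oc label node <node_name> myoperator=

# 2. Taint the nodes so that pods without a matching toleration are
#    not scheduled onto them
oc adm taint nodes <node_name> myoperator:NoSchedule

# 3. Create a project with a default node selector, then add a default
#    toleration for the taint as a namespace annotation
oc adm new-project myoperator-project --node-selector='myoperator='
oc annotate namespace myoperator-project \
    scheduler.alpha.kubernetes.io/defaultTolerations='[{"operator": "Exists", "key": "myoperator", "effect": "NoSchedule"}]'
```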

At this point, the project you created can be used to steer pods towards the specified nodes in the following scenarios:

For Operator pods

Administrators can create a Subscription object in the project as described in the following section. As a result, the Operator pods are placed on the specified nodes.

For Operand pods

Using an installed Operator, users can create an application in the project, which places the custom resource (CR) owned by the Operator in the project. As a result, the Operand pods are placed on the specified nodes, unless the Operator is deploying cluster-wide objects or resources in other namespaces, in which case this customized pod placement does not apply.

Additional resources

Controlling where an Operator is installed

By default, when you install an Operator, OKD installs the Operator pod to one of your worker nodes randomly. However, there might be situations where you want that pod scheduled on a specific node or set of nodes.

The following examples describe situations where you might want to schedule an Operator pod to a specific node or set of nodes:

  • If an Operator requires a particular platform, such as amd64 or arm64

  • If an Operator requires a particular operating system, such as Linux or Windows

  • If you want Operators that work together scheduled on the same host or on hosts located on the same rack

  • If you want Operators dispersed throughout the infrastructure to avoid downtime due to network or hardware issues

You can control where an Operator pod is installed by adding node affinity, pod affinity, or pod anti-affinity constraints to the Operator’s Subscription object. Node affinity is a set of rules used by the scheduler to determine where a pod can be placed. Pod affinity enables you to ensure that related pods are scheduled to the same node. Pod anti-affinity allows you to prevent a pod from being scheduled on a node.

The following examples show how to use node affinity or pod anti-affinity to install an instance of the Custom Metrics Autoscaler Operator to a specific node in the cluster:

Node affinity example that places the Operator pod on a specific node

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        nodeAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - ip-10-0-163-94.us-west-2.compute.internal
  #...

(1) A node affinity that requires the Operator’s pod to be scheduled on a node named ip-10-0-163-94.us-west-2.compute.internal.

Node affinity example that places the Operator pod on a node with a specific platform

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        nodeAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
  #...

(1) A node affinity that requires the Operator’s pod to be scheduled on a node with the kubernetes.io/arch=arm64 and kubernetes.io/os=linux labels.

Pod affinity example that places the Operator pod on one or more specific nodes

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        podAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - test
            topologyKey: kubernetes.io/hostname
  #...

(1) A pod affinity that places the Operator’s pod on a node that has pods with the app=test label.

Pod anti-affinity example that prevents the Operator pod from being scheduled on one or more specific nodes

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-custom-metrics-autoscaler-operator
    namespace: openshift-keda
  spec:
    name: my-package
    source: my-operators
    sourceNamespace: operator-registries
    config:
      affinity:
        podAntiAffinity: (1)
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: cpu
                operator: In
                values:
                - high
            topologyKey: kubernetes.io/hostname
  #...

(1) A pod anti-affinity that prevents the Operator’s pod from being scheduled on a node that has pods with the cpu=high label.

Procedure

To control the placement of an Operator pod, complete the following steps:

  1. Install the Operator as usual.

  2. If needed, ensure that your nodes are labeled to properly respond to the affinity.

  3. Edit the Operator Subscription object to add an affinity:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: openshift-custom-metrics-autoscaler-operator
      namespace: openshift-keda
    spec:
      name: my-package
      source: my-operators
      sourceNamespace: operator-registries
      config:
        affinity: (1)
          nodeAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values:
                  - ip-10-0-185-229.ec2.internal
    #...

    (1) Add a nodeAffinity, podAffinity, or podAntiAffinity. See the Additional resources section that follows for information about creating the affinity.

Verification

  • To ensure that the pod is deployed on the specific node, run the following command:

    $ oc get pods -o wide

    Example output

    NAME                                                  READY   STATUS    RESTARTS   AGE   IP            NODE                           NOMINATED NODE   READINESS GATES
    custom-metrics-autoscaler-operator-5dcc45d656-bhshg   1/1     Running   0          50s   10.131.0.20   ip-10-0-185-229.ec2.internal   <none>           <none>

Additional resources