Updating managed clusters with the Topology Aware Lifecycle Manager

You can use the Topology Aware Lifecycle Manager (TALM) to manage the software lifecycle of OKD managed clusters. TALM uses Red Hat Advanced Cluster Management (RHACM) policies to perform changes on the target clusters.

Additional resources

Updating clusters in a disconnected environment

You can upgrade managed clusters and Operators for managed clusters that you have deployed using GitOps ZTP and Topology Aware Lifecycle Manager (TALM).

Setting up the environment

TALM can perform both platform and Operator updates.

You must mirror both the platform image and the Operator images that you want to update to into your mirror registry before you can use TALM to update your disconnected clusters. Complete the following steps to mirror the images:

  • For platform updates, you must perform the following steps:

    1. Mirror the desired OKD image repository. Ensure that the desired platform image is mirrored by following the “Mirroring the OKD image repository” procedure linked in the Additional resources. Save the contents of the imageContentSources section to the imageContentSources.yaml file:

      Example output

      imageContentSources:
      - mirrors:
        - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
    2. Save the image signature of the desired platform image that was mirrored. You must add the image signature to the PolicyGenTemplate CR for platform updates. To get the image signature, perform the following steps:

      1. Specify the desired OKD tag by running the following command:

        $ OCP_RELEASE_NUMBER=<release_version>
      2. Specify the architecture of the server by running the following command:

        $ ARCHITECTURE=<server_architecture>
      3. Get the release image digest from Quay by running the following command:

        $ DIGEST="$(oc adm release info quay.io/openshift-release-dev/ocp-release:${OCP_RELEASE_NUMBER}-${ARCHITECTURE} | sed -n 's/Pull From: .*@//p')"
      4. Set the digest algorithm by running the following command:

        $ DIGEST_ALGO="${DIGEST%%:*}"
      5. Set the encoded digest by running the following command:

        $ DIGEST_ENCODED="${DIGEST#*:}"
      6. Get the image signature from the mirror.openshift.com website by running the following command:

        $ SIGNATURE_BASE64=$(curl -s "https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/${DIGEST_ALGO}=${DIGEST_ENCODED}/signature-1" | base64 -w0 && echo)
      7. Save the image signature to the checksum-<OCP_RELEASE_NUMBER>.yaml file by running the following commands:

        $ cat >checksum-${OCP_RELEASE_NUMBER}.yaml <<EOF
        ${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}
        EOF
    3. Prepare the update graph. You have two options to prepare the update graph:

      1. Use the OpenShift Update Service.

        For more information about how to set up the graph on the hub cluster, see Deploy the operator for OpenShift Update Service and Build the graph data init container.

      2. Make a local copy of the upstream graph. Host the update graph on an HTTP or HTTPS server in the disconnected environment that has access to the managed cluster. To download the update graph, use the following command:

        $ curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.12 -o ~/upgrade-graph_stable-4.12
  • For Operator updates, you must perform the following task:

    • Mirror the Operator catalogs. Ensure that the desired Operator images are mirrored by following the procedure in the “Mirroring Operator catalogs for use with disconnected clusters” section.
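The digest splitting and checksum-file steps above can be sketched offline, without cluster access. In this sketch, the release number, digest, and signature values are placeholders for illustration only; the real values come from oc adm release info and mirror.openshift.com:

```shell
# Offline sketch of the digest and checksum handling from the steps above.
# The release number, digest, and signature below are placeholder values.
OCP_RELEASE_NUMBER="4.12.4"
DIGEST="sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
SIGNATURE_BASE64="cGxhY2Vob2xkZXItc2lnbmF0dXJl"

# ${DIGEST%%:*} removes everything from the first ":" onward, leaving the algorithm.
DIGEST_ALGO="${DIGEST%%:*}"
# ${DIGEST#*:} removes everything up to and including the first ":", leaving the hash.
DIGEST_ENCODED="${DIGEST#*:}"

# Write the checksum file in the same form as the procedure produces.
cat >checksum-${OCP_RELEASE_NUMBER}.yaml <<EOF
${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64}
EOF

echo "${DIGEST_ALGO}"
cat checksum-${OCP_RELEASE_NUMBER}.yaml
```

The resulting key, `<algorithm>-<encoded_digest>`, is the same key format used later in the ImageSignature.yaml binaryData field of the PolicyGenTemplate CR.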

Additional resources

Performing a platform update

You can perform a platform update with the TALM.

Prerequisites

  • Install the Topology Aware Lifecycle Manager (TALM).

  • Update ZTP to the latest version.

  • Provision one or more managed clusters with ZTP.

  • Mirror the desired image repository.

  • Log in as a user with cluster-admin privileges.

  • Create RHACM policies in the hub cluster.

Procedure

  1. Create a PolicyGenTemplate CR for the platform update:

    1. Save the following contents of the PolicyGenTemplate CR in the du-upgrade.yaml file.

      Example of PolicyGenTemplate for platform update

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "du-upgrade"
        namespace: "ztp-group-du-sno"
      spec:
        bindingRules:
          group-du-sno: ""
        mcp: "master"
        remediationAction: inform
        sourceFiles:
        - fileName: ImageSignature.yaml (1)
          policyName: "platform-upgrade-prep"
          binaryData:
            ${DIGEST_ALGO}-${DIGEST_ENCODED}: ${SIGNATURE_BASE64} (2)
        - fileName: DisconnectedICSP.yaml
          policyName: "platform-upgrade-prep"
          metadata:
            name: disconnected-internal-icsp-for-ocp
          spec:
            repositoryDigestMirrors: (3)
            - mirrors:
              - quay-intern.example.com/ocp4/openshift-release-dev
              source: quay.io/openshift-release-dev/ocp-release
            - mirrors:
              - quay-intern.example.com/ocp4/openshift-release-dev
              source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
        - fileName: ClusterVersion.yaml (4)
          policyName: "platform-upgrade-prep"
          metadata:
            name: version
            annotations:
              ran.openshift.io/ztp-deploy-wave: "1"
          spec:
            channel: "stable-4.12"
            upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.12
        - fileName: ClusterVersion.yaml (5)
          policyName: "platform-upgrade"
          metadata:
            name: version
          spec:
            channel: "stable-4.12"
            upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.12
            desiredUpdate:
              version: 4.12.4
          status:
            history:
            - version: 4.12.4
              state: "Completed"
      (1) The ConfigMap CR contains the signature of the desired release image to update to.
      (2) Shows the image signature of the desired OKD release. Get the signature from the checksum-${OCP_RELEASE_NUMBER}.yaml file that you saved when following the procedures in the “Setting up the environment” section.
      (3) Shows the mirror repository that contains the desired OKD image. Get the mirrors from the imageContentSources.yaml file that you saved when following the procedures in the “Setting up the environment” section.
      (4) Shows the ClusterVersion CR that updates the upstream field.
      (5) Shows the ClusterVersion CR that triggers the update. The channel, upstream, and desiredVersion fields are all required for image pre-caching.

      The PolicyGenTemplate CR generates two policies:

      • The du-upgrade-platform-upgrade-prep policy does the preparation work for the platform update. It creates the ConfigMap CR for the desired release image signature, creates the image content source of the mirrored release image repository, and updates the cluster version with the desired update channel and the update graph reachable by the managed cluster in the disconnected environment.

      • The du-upgrade-platform-upgrade policy is used to perform the platform upgrade.

    2. Add the du-upgrade.yaml file contents to the kustomization.yaml file located in the ZTP Git repository for the PolicyGenTemplate CRs and push the changes to the Git repository.

      ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.

    3. Check the created policies by running the following command:

      $ oc get policies -A | grep platform-upgrade
  2. Apply the required update resources before starting the platform update with the TALM.

    1. Save the content of the platform-upgrade-prep ClusterGroupUpgrade CR with the du-upgrade-platform-upgrade-prep policy and the target managed clusters to the cgu-platform-upgrade-prep.yml file, as shown in the following example:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-platform-upgrade-prep
        namespace: default
      spec:
        managedPolicies:
        - du-upgrade-platform-upgrade-prep
        clusters:
        - spoke1
        remediationStrategy:
          maxConcurrency: 1
        enable: true
    2. Apply the policy to the hub cluster by running the following command:

      $ oc apply -f cgu-platform-upgrade-prep.yml
    3. Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies --all-namespaces
  3. Create the ClusterGroupUpgrade CR for the platform update with the spec.enable field set to false.

    1. Save the content of the platform update ClusterGroupUpgrade CR with the du-upgrade-platform-upgrade policy and the target clusters to the cgu-platform-upgrade.yml file, as shown in the following example:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-platform-upgrade
        namespace: default
      spec:
        managedPolicies:
        - du-upgrade-platform-upgrade
        preCaching: false
        clusters:
        - spoke1
        remediationStrategy:
          maxConcurrency: 1
        enable: false
    2. Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command:

      $ oc apply -f cgu-platform-upgrade.yml
  4. Optional: Pre-cache the images for the platform update.

    1. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \
        --patch '{"spec":{"preCaching": true}}' --type=merge
    2. Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:

      $ oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'
  5. Start the platform update:

    1. Enable the cgu-platform-upgrade policy and disable pre-caching by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade \
        --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
    2. Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies --all-namespaces

Additional resources

Performing an Operator update

You can perform an Operator update with the TALM.

Prerequisites

  • Install the Topology Aware Lifecycle Manager (TALM).

  • Update ZTP to the latest version.

  • Provision one or more managed clusters with ZTP.

  • Mirror the desired index image, bundle images, and all Operator images referenced in the bundle images.

  • Log in as a user with cluster-admin privileges.

  • Create RHACM policies in the hub cluster.

Procedure

  1. Update the PolicyGenTemplate CR for the Operator update.

    1. Update the du-upgrade PolicyGenTemplate CR with the following additional contents in the du-upgrade.yaml file:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "du-upgrade"
        namespace: "ztp-group-du-sno"
      spec:
        bindingRules:
          group-du-sno: ""
        mcp: "master"
        remediationAction: inform
        sourceFiles:
        - fileName: DefaultCatsrc.yaml
          remediationAction: inform
          policyName: "operator-catsrc-policy"
          metadata:
            name: redhat-operators
          spec:
            displayName: Red Hat Operators Catalog
            image: registry.example.com:5000/olm/redhat-operators:v4.12 (1)
            updateStrategy: (2)
              registryPoll:
                interval: 1h
      (1) The index image URL contains the desired Operator images. If the index images are always pushed to the same image name and tag, this change is not needed.
      (2) Set how frequently the Operator Lifecycle Manager (OLM) polls the index image for new Operator versions with the registryPoll.interval field. This change is not needed if a new index image tag is always pushed for y-stream and z-stream Operator updates. The registryPoll.interval field can be set to a shorter interval to expedite the update; however, shorter intervals increase computational load. To counteract this, you can restore registryPoll.interval to the default value after the update completes.
    2. This update generates one policy, du-upgrade-operator-catsrc-policy, to update the redhat-operators catalog source with the new index images that contain the desired Operator images.

      If you want to use image pre-caching for Operators and there are Operators from a catalog source other than redhat-operators, you must perform the following tasks:

      • Prepare a separate catalog source policy with the new index image or registry poll interval update for the different catalog source.

      • Prepare a separate subscription policy for the desired Operators that are from the different catalog source.

      For example, the desired SRIOV-FEC Operator is available in the certified-operators catalog source. To update the catalog source and the Operator subscription, add the following contents to generate two policies, du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy:

      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "du-upgrade"
        namespace: "ztp-group-du-sno"
      spec:
        bindingRules:
          group-du-sno: ""
        mcp: "master"
        remediationAction: inform
        sourceFiles:
        - fileName: DefaultCatsrc.yaml
          remediationAction: inform
          policyName: "fec-catsrc-policy"
          metadata:
            name: certified-operators
          spec:
            displayName: Intel SRIOV-FEC Operator
            image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10
            updateStrategy:
              registryPoll:
                interval: 10m
        - fileName: AcceleratorsSubscription.yaml
          policyName: "subscriptions-fec-policy"
          spec:
            channel: "stable"
            source: certified-operators
    3. Remove the specified subscription channels in the common PolicyGenTemplate CR, if they exist. The default subscription channels from the ZTP image are used for the update.

      The default channel for the Operators applied through ZTP 4.12 is stable, except for the performance-addon-operator. As of OKD 4.11, the performance-addon-operator functionality was moved to the node-tuning-operator. For the 4.10 release, the default channel for PAO is v4.10. You can also specify the default channels in the common PolicyGenTemplate CR.

    4. Push the PolicyGenTemplate CRs updates to the ZTP Git repository.

      ArgoCD pulls the changes from the Git repository and generates the policies on the hub cluster.

    5. Check the created policies by running the following command:

      $ oc get policies -A | grep -E "catsrc-policy|subscription"
  2. Apply the required catalog source updates before starting the Operator update.

    1. Save the content of the ClusterGroupUpgrade CR named operator-upgrade-prep with the catalog source policies and the target managed clusters to the cgu-operator-upgrade-prep.yml file:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-operator-upgrade-prep
        namespace: default
      spec:
        clusters:
        - spoke1
        enable: true
        managedPolicies:
        - du-upgrade-operator-catsrc-policy
        remediationStrategy:
          maxConcurrency: 1
    2. Apply the policy to the hub cluster by running the following command:

      $ oc apply -f cgu-operator-upgrade-prep.yml
    3. Monitor the update process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies -A | grep -E "catsrc-policy"
  3. Create the ClusterGroupUpgrade CR for the Operator update with the spec.enable field set to false.

    1. Save the content of the Operator update ClusterGroupUpgrade CR with the du-upgrade-operator-catsrc-policy policy, the subscription policies created from the common PolicyGenTemplate, and the target clusters to the cgu-operator-upgrade.yml file, as shown in the following example:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-operator-upgrade
        namespace: default
      spec:
        managedPolicies:
        - du-upgrade-operator-catsrc-policy (1)
        - common-subscriptions-policy (2)
        preCaching: false
        clusters:
        - spoke1
        remediationStrategy:
          maxConcurrency: 1
        enable: false
      (1) The policy is needed by the image pre-caching feature to retrieve the Operator images from the catalog source.
      (2) The policy contains the Operator subscriptions. If you have followed the structure and content of the reference PolicyGenTemplates, all Operator subscriptions are grouped into the common-subscriptions-policy policy.

      One ClusterGroupUpgrade CR can only pre-cache the images of the desired Operators defined in the subscription policy from one catalog source included in the ClusterGroupUpgrade CR. If the desired Operators are from different catalog sources, such as the SRIOV-FEC Operator in the earlier example, you must create another ClusterGroupUpgrade CR with the du-upgrade-fec-catsrc-policy and du-upgrade-subscriptions-fec-policy policies to pre-cache and update the SRIOV-FEC Operator images.

    2. Apply the ClusterGroupUpgrade CR to the hub cluster by running the following command:

      $ oc apply -f cgu-operator-upgrade.yml
  4. Optional: Pre-cache the images for the Operator update.

    1. Before starting image pre-caching, verify that the subscription policy is NonCompliant by running the following command:

      $ oc get policy common-subscriptions-policy -n <policy_namespace>

      Example output

      NAME                          REMEDIATION ACTION   COMPLIANCE STATE   AGE
      common-subscriptions-policy   inform               NonCompliant       27d
    2. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
        --patch '{"spec":{"preCaching": true}}' --type=merge
    3. Monitor the process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the hub cluster:

      $ oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'
    4. Check if the pre-caching is completed before starting the update by running the following command:

      $ oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq

      Example output

      [
        {
          "lastTransitionTime": "2022-03-08T20:49:08.000Z",
          "message": "The ClusterGroupUpgrade CR is not enabled",
          "reason": "UpgradeNotStarted",
          "status": "False",
          "type": "Ready"
        },
        {
          "lastTransitionTime": "2022-03-08T20:55:30.000Z",
          "message": "Precaching is completed",
          "reason": "PrecachingCompleted",
          "status": "True",
          "type": "PrecachingDone"
        }
      ]
  5. Start the Operator update.

    1. Enable the cgu-operator-upgrade ClusterGroupUpgrade CR and disable pre-caching to start the Operator update by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade \
        --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
    2. Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies --all-namespaces
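As an offline illustration of the condition check used in the pre-caching step, the PrecachingDone condition can be extracted from a saved copy of the .status.conditions array with jq (the same tool the procedure already pipes to). The JSON below is an abbreviated, illustrative sample, not live cluster output:

```shell
# Abbreviated, illustrative copy of a ClusterGroupUpgrade .status.conditions array.
CONDITIONS='[
  {"type": "Ready", "status": "False", "reason": "UpgradeNotStarted"},
  {"type": "PrecachingDone", "status": "True", "reason": "PrecachingCompleted"}
]'

# Select the PrecachingDone condition and print its status field.
PRECACHING_STATUS=$(echo "${CONDITIONS}" | jq -r '.[] | select(.type == "PrecachingDone") | .status')
echo "${PRECACHING_STATUS}"
```

The same jq filter works against live output, for example by piping `oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}'` into it instead of the saved sample.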

Additional resources

Performing a platform and an Operator update together

You can perform a platform and an Operator update at the same time.

Prerequisites

  • Install the Topology Aware Lifecycle Manager (TALM).

  • Update ZTP to the latest version.

  • Provision one or more managed clusters with ZTP.

  • Log in as a user with cluster-admin privileges.

  • Create RHACM policies in the hub cluster.

Procedure

  1. Create the PolicyGenTemplate CR for the updates by following the steps described in the “Performing a platform update” and “Performing an Operator update” sections.

  2. Apply the prep work for the platform and the Operator update.

    1. Save the content of the ClusterGroupUpgrade CR with the policies for platform update preparation work, catalog source updates, and target clusters to the cgu-platform-operator-upgrade-prep.yml file, for example:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-platform-operator-upgrade-prep
        namespace: default
      spec:
        managedPolicies:
        - du-upgrade-platform-upgrade-prep
        - du-upgrade-operator-catsrc-policy
        clusterSelector:
        - group-du-sno
        remediationStrategy:
          maxConcurrency: 10
        enable: true
    2. Apply the cgu-platform-operator-upgrade-prep.yml file to the hub cluster by running the following command:

      $ oc apply -f cgu-platform-operator-upgrade-prep.yml
    3. Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies --all-namespaces
  3. Create the ClusterGroupUpgrade CR for the platform and the Operator update with the spec.enable field set to false.

    1. Save the contents of the platform and Operator update ClusterGroupUpgrade CR with the policies and the target clusters to the cgu-platform-operator-upgrade.yml file, as shown in the following example:

      apiVersion: ran.openshift.io/v1alpha1
      kind: ClusterGroupUpgrade
      metadata:
        name: cgu-du-upgrade
        namespace: default
      spec:
        managedPolicies:
        - du-upgrade-platform-upgrade (1)
        - du-upgrade-operator-catsrc-policy (2)
        - common-subscriptions-policy (3)
        preCaching: true
        clusterSelector:
        - group-du-sno
        remediationStrategy:
          maxConcurrency: 1
        enable: false
      (1) This is the platform update policy.
      (2) This is the policy that contains the catalog source information for the Operators to be updated. The pre-caching feature needs it to determine which Operator images to download to the managed cluster.
      (3) This is the policy to update the Operators.
    2. Apply the cgu-platform-operator-upgrade.yml file to the hub cluster by running the following command:

      $ oc apply -f cgu-platform-operator-upgrade.yml
  4. Optional: Pre-cache the images for the platform and the Operator update.

    1. Enable pre-caching in the ClusterGroupUpgrade CR by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
        --patch '{"spec":{"preCaching": true}}' --type=merge
    2. Monitor the update process and wait for the pre-caching to complete. Check the status of pre-caching by running the following command on the managed cluster:

      $ oc get jobs,pods -n openshift-talm-pre-cache
    3. Check if the pre-caching is completed before starting the update by running the following command:

      $ oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'
  5. Start the platform and Operator update.

    1. Enable the cgu-du-upgrade ClusterGroupUpgrade CR to start the platform and the Operator update by running the following command:

      $ oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade \
        --patch '{"spec":{"enable":true, "preCaching": false}}' --type=merge
    2. Monitor the process. Upon completion, ensure that the policy is compliant by running the following command:

      $ oc get policies --all-namespaces

      The CRs for the platform and Operator updates can be created with spec.enable: true from the beginning. In this case, the update starts immediately after pre-caching completes and there is no need to manually enable the CR.

      Both pre-caching and the update create extra resources, such as policies, placement bindings, placement rules, managed cluster actions, and managed cluster views, to help complete the procedures. Setting the afterCompletion.deleteObjects field to true deletes all of these resources after the updates complete.

Removing Performance Addon Operator subscriptions from deployed clusters

In earlier versions of OKD, the Performance Addon Operator provided automatic, low latency performance tuning for applications. In OKD 4.11 or later, these functions are part of the Node Tuning Operator.

Do not install the Performance Addon Operator on clusters running OKD 4.11 or later. If you upgrade to OKD 4.11 or later, the Node Tuning Operator automatically removes the Performance Addon Operator.

You need to remove any policies that create Performance Addon Operator subscriptions to prevent a re-installation of the Operator.

The reference DU profile includes the Performance Addon Operator in the PolicyGenTemplate CR common-ranGen.yaml. To remove the subscription from deployed managed clusters, you must update common-ranGen.yaml.

If you install Performance Addon Operator 4.10.3-5 or later on OKD 4.11 or later, the Performance Addon Operator detects the cluster version and automatically hibernates to avoid interfering with the Node Tuning Operator functions. However, to ensure best performance, remove the Performance Addon Operator from your OKD 4.11 clusters.

Prerequisites

  • Create a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for ArgoCD.

  • Update to OKD 4.11 or later.

  • Log in as a user with cluster-admin privileges.

Procedure

  1. Change the complianceType to mustnothave for the Performance Addon Operator namespace, Operator group, and subscription in the common-ranGen.yaml file.

    - fileName: PaoSubscriptionNS.yaml
      policyName: "subscriptions-policy"
      complianceType: mustnothave
    - fileName: PaoSubscriptionOperGroup.yaml
      policyName: "subscriptions-policy"
      complianceType: mustnothave
    - fileName: PaoSubscription.yaml
      policyName: "subscriptions-policy"
      complianceType: mustnothave
  2. Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The status of the common-subscriptions-policy policy changes to NonCompliant.

  3. Apply the change to your target clusters by using the Topology Aware Lifecycle Manager. For more information about rolling out configuration changes, see the “Additional resources” section.

  4. Monitor the process. When the status of the common-subscriptions-policy policy for a target cluster is Compliant, the Performance Addon Operator has been removed from the cluster. Get the status of the common-subscriptions-policy by running the following command:

    $ oc get policy -n ztp-common common-subscriptions-policy
  5. Delete the Performance Addon Operator namespace, Operator group, and subscription CRs from .spec.sourceFiles in the common-ranGen.yaml file.

  6. Merge the changes with your custom site repository and wait for the ArgoCD application to synchronize the change to the hub cluster. The policy remains compliant.

About the auto-created ClusterGroupUpgrade CR for ZTP

TALM has a controller called ManagedClusterForCGU that monitors the Ready state of the ManagedCluster CRs on the hub cluster and creates the ClusterGroupUpgrade CRs for ZTP (zero touch provisioning).

For any managed cluster in the Ready state without a “ztp-done” label applied, the ManagedClusterForCGU controller automatically creates a ClusterGroupUpgrade CR in the ztp-install namespace with its associated RHACM policies that are created during the ZTP process. TALM then remediates the set of configuration policies that are listed in the auto-created ClusterGroupUpgrade CR to push the configuration CRs to the managed cluster.

If the managed cluster has no bound policies when the cluster becomes Ready, no ClusterGroupUpgrade CR is created.

Example of an auto-created ClusterGroupUpgrade CR for ZTP

  apiVersion: ran.openshift.io/v1alpha1
  kind: ClusterGroupUpgrade
  metadata:
    generation: 1
    name: spoke1
    namespace: ztp-install
    ownerReferences:
    - apiVersion: cluster.open-cluster-management.io/v1
      blockOwnerDeletion: true
      controller: true
      kind: ManagedCluster
      name: spoke1
      uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5
    resourceVersion: "46666836"
    uid: b8be9cd2-764f-4a62-87d6-6b767852c7da
  spec:
    actions:
      afterCompletion:
        addClusterLabels:
          ztp-done: "" (1)
        deleteClusterLabels:
          ztp-running: ""
        deleteObjects: true
      beforeEnable:
        addClusterLabels:
          ztp-running: "" (2)
    clusters:
    - spoke1
    enable: true
    managedPolicies:
    - common-spoke1-config-policy
    - common-spoke1-subscriptions-policy
    - group-spoke1-config-policy
    - spoke1-config-policy
    - group-spoke1-validator-du-policy
    preCaching: false
    remediationStrategy:
      maxConcurrency: 1
      timeout: 240
(1) Applied to the managed cluster when TALM completes the cluster configuration.
(2) Applied to the managed cluster when TALM starts deploying the configuration policies.