Preparing the hub cluster for ZTP

To use RHACM in a disconnected environment, create a mirror registry that mirrors the OKD release images and the Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the FCOS ISO and RootFS disk images that are used to provision the bare-metal hosts.
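For example, a minimal sketch of mirroring a release image to the disconnected registry with oc adm release mirror. The registry host, repository, and release tag are placeholders, and the quay.io/openshift/okd source repository is an assumption for OKD release payloads; mirror the OLM catalog separately with your preferred tooling:

  $ oc adm release mirror \
    --from=quay.io/openshift/okd:<release_tag> \
    --to=<mirror_registry_fqdn>/<repository> \
    --to-release-image=<mirror_registry_fqdn>/<repository>:<release_tag>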

Telco RAN 4.12 validated solution software versions

The Red Hat Telco Radio Access Network (RAN) version 4.12 solution has been validated using the following Red Hat software products.

Table 1. Telco RAN 4.12 validated solution software
Product                                     | Software version
--------------------------------------------|---------------------
Hub cluster OKD version                     | 4.12
GitOps ZTP plugin                           | 4.10, 4.11, or 4.12
Red Hat Advanced Cluster Management (RHACM) | 2.6
Red Hat OpenShift GitOps                    | 1.6
Topology Aware Lifecycle Manager (TALM)     | 4.11 or 4.12

Installing GitOps ZTP in a disconnected environment

Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters.

Prerequisites

  • You have installed the OKD CLI (oc).

  • You have logged in as a user with cluster-admin privileges.

  • You have configured a disconnected mirror registry for use in the cluster.

    The disconnected mirror registry that you create must contain versions of the TALM backup and pre-cache images that match the version of TALM running on the hub cluster. The spoke clusters must be able to resolve these images from the disconnected mirror registry.

Procedure

  • Install RHACM in the disconnected environment.

  • Install Red Hat OpenShift GitOps and Topology Aware Lifecycle Manager (TALM) in the disconnected environment.

Adding FCOS ISO and RootFS images to the disconnected mirror host

Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host the Fedora CoreOS (FCOS) images for RHACM to use. Use a disconnected mirror to host the FCOS images.

Prerequisites

  • Deploy and configure an HTTP server to host the FCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create.

The FCOS images might not change with every release of OKD. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OKD version if they are available. You require ISO and RootFS images to install FCOS on the hosts. FCOS QCOW2 images are not supported for this installation type.
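For example, a minimal sketch of serving the images with Apache httpd on a RHEL or Fedora mirror host. The package name, the /var/www/html document root, and the firewall commands are assumptions; any HTTP server that the hosts can reach works:

  $ sudo dnf install -y httpd
  $ sudo systemctl enable --now httpd
  $ sudo firewall-cmd --add-service=http --permanent && sudo firewall-cmd --reload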

Procedure

  1. Log in to the mirror host.

  2. Obtain the FCOS ISO and RootFS images from mirror.openshift.com, for example:

    1. Export the required image names and OKD version as environment variables:

      $ export ISO_IMAGE_NAME=<iso_image_name> (1)
      $ export ROOTFS_IMAGE_NAME=<rootfs_image_name> (2)
      $ export OCP_VERSION=<ocp_version> (3)

      (1) ISO image name, for example, rhcos-4.12.1-x86_64-live.x86_64.iso
      (2) RootFS image name, for example, rhcos-4.12.1-x86_64-live-rootfs.x86_64.img
      (3) OKD version, for example, 4.12.1
    2. Download the required images:

      $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/${OCP_VERSION}/${ISO_IMAGE_NAME} -O /var/www/html/${ISO_IMAGE_NAME}

      $ sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.12/${OCP_VERSION}/${ROOTFS_IMAGE_NAME} -O /var/www/html/${ROOTFS_IMAGE_NAME}

Verification steps

  • Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example:

    $ wget http://$(hostname)/${ISO_IMAGE_NAME}

    Example output

    Saving to: rhcos-4.12.1-x86_64-live.x86_64.iso
    rhcos-4.12.1-x86_64-live.x86_64.iso-  11%[====>              ]  10.01M  4.71MB/s


Enabling the assisted service and updating AgentServiceConfig on the hub cluster

Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OKD clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator with Central Infrastructure Management (CIM). When you have enabled CIM on the hub cluster, you then need to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server.

Prerequisites

  • You have installed the OKD CLI (oc).

  • You have logged in to the hub cluster as a user with cluster-admin privileges.

  • You have enabled the assisted service on the hub cluster. For more information, see Enabling CIM.

Procedure

  1. Update the AgentServiceConfig CR by running the following command:

    $ oc edit AgentServiceConfig
  2. Add the following entry to the items.spec.osImages field in the CR:

    - cpuArchitecture: x86_64
      openshiftVersion: "4.12"
      rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img
      url: https://<host>/<path>/rhcos-live.x86_64.iso

    where:

    <host>: The fully qualified domain name (FQDN) of the target mirror registry HTTP server.

    <path>: The path to the image on the target mirror registry.
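    For example, a filled-in entry, assuming a hypothetical mirror HTTP server at mirror.example.com that serves the images under /images:

    - cpuArchitecture: x86_64
      openshiftVersion: "4.12"
      rootFSUrl: https://mirror.example.com/images/rhcos-4.12.1-x86_64-live-rootfs.x86_64.img
      url: https://mirror.example.com/images/rhcos-4.12.1-x86_64-live.x86_64.iso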

    Save and quit the editor to apply the changes.

Configuring the hub cluster to use a disconnected mirror registry

You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment.

Prerequisites

  • You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.6 installed.

  • You have hosted the RootFS and ISO images on an HTTP server.

If you enable TLS for the HTTP server, you must confirm that the root certificate is signed by an authority that the client trusts, and verify the trusted certificate chain between the OKD hub cluster, the managed clusters, and the HTTP server. Using a server that is configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported.
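One hedged way to inspect the certificate chain that the HTTP server presents, assuming a hypothetical server at mirror.example.com listening for TLS on port 443:

  $ openssl s_client -connect mirror.example.com:443 -showcerts </dev/null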

Procedure

  1. Create a ConfigMap containing the mirror registry config:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: assisted-installer-mirror-config
      namespace: assisted-installer
      labels:
        app: assisted-service
    data:
      ca-bundle.crt: <certificate> (1)
      registries.conf: | (2)
        unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]

        [[registry]]
          location = <mirror_registry_url> (3)
          insecure = false
          mirror-by-digest-only = true

    (1) The mirror registry’s certificate that is used when creating the mirror registry.
    (2) The configuration for the mirror registry.
    (3) The URL of the mirror registry.
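    For example, if the ConfigMap is saved as mirror-config.yaml (a hypothetical file name), apply it to the hub cluster by running:

    $ oc apply -f mirror-config.yaml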

    This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below:

    Example output

    apiVersion: agent-install.openshift.io/v1beta1
    kind: AgentServiceConfig
    metadata:
      name: agent
      namespace: assisted-installer
    spec:
      databaseStorage:
        volumeName: <db_pv_name>
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: <db_storage_size>
      filesystemStorage:
        volumeName: <fs_pv_name>
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: <fs_storage_size>
      mirrorRegistryRef:
        name: 'assisted-installer-mirror-config'
      osImages:
      - openshiftVersion: <ocp_version>
        rootfs: <rootfs_url> (1)
        url: <iso_url> (1)

    (1) Must match the URLs of the HTTPD server.

A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network.
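For example, one hedged way to check NTP synchronization from a deployed cluster node that runs chrony is to log in to the node and query the configured time sources:

  $ chronyc sources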

Configuring the hub cluster with ArgoCD

You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps zero touch provisioning (ZTP).

Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs.

Prerequisites

  • You have an OKD hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed.

  • You have extracted the reference deployment from the ZTP GitOps plugin container as described in the “Preparing the GitOps ZTP site configuration repository” section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure.

Procedure

  1. Prepare the ArgoCD pipeline configuration:

    1. Create a Git repository with a directory structure similar to the example directory. For more information, see “Preparing the GitOps ZTP site configuration repository”.

    2. Configure access to the repository by using the ArgoCD UI. Under Settings, configure the following:

      • Repositories - Add the connection information. The URL must end in .git, for example, https://repo.example.com/repo.git, and the credentials.

      • Certificates - Add the public certificate for the repository, if needed.

    3. Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml, based on your Git repository:

      • Update the URL to point to the Git repository. The URL must end in .git, for example, https://repo.example.com/repo.git.

      • The targetRevision indicates which Git repository branch to monitor.

      • path specifies the path to the SiteConfig and PolicyGenTemplate CRs, respectively. The relevant fields are shown in the sketch after this list.
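    A minimal sketch of the fields to update in clusters-app.yaml, assuming a hypothetical repository URL, branch, and path; leave the remaining fields in the extracted file unchanged:

      apiVersion: argoproj.io/v1alpha1
      kind: Application
      spec:
        source:
          repoURL: https://repo.example.com/repo.git
          targetRevision: main
          path: siteconfig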

  2. To install the ZTP GitOps plugin, patch the ArgoCD instance in the hub cluster by using the patch file that you previously extracted into the out/argocd/deployment/ directory. Run the following command:

    $ oc patch argocd openshift-gitops \
      -n openshift-gitops --type=merge \
      --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json
  3. Apply the pipeline configuration to your hub cluster by using the following command:

    $ oc apply -k out/argocd/deployment
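    One hedged way to confirm that the two ArgoCD applications were created, assuming the default openshift-gitops namespace:

    $ oc get applications.argoproj.io -n openshift-gitops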


Preparing the GitOps ZTP site configuration repository

Before you can use the ZTP GitOps pipeline, you need to prepare the Git repository to host the site configuration data.

Prerequisites

  • You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs).

  • You have deployed the managed clusters using zero touch provisioning (ZTP).

Procedure

  1. Create a directory structure with separate paths for the SiteConfig and PolicyGenTemplate CRs.

  2. Export the argocd directory from the ztp-site-generate container image using the following commands:

    $ podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12

    $ mkdir -p ./out

    $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12 extract /home/ztp --tar | tar x -C ./out
  3. Check that the out directory contains the following subdirectories:

    • out/extra-manifest contains the source CR files that SiteConfig uses to generate the extra manifests ConfigMap.

    • out/source-crs contains the source CR files that PolicyGenTemplate uses to generate the Red Hat Advanced Cluster Management (RHACM) policies.

    • out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the next step of this procedure.

    • out/argocd/example contains the examples for SiteConfig and PolicyGenTemplate files that represent the recommended configuration.

The directory structure under out/argocd/example serves as a reference for the structure and content of your Git repository. The example includes SiteConfig and PolicyGenTemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using. The following example describes a set of CRs for a network of single-node clusters:

  example
  ├── policygentemplates
  │   ├── common-ranGen.yaml
  │   ├── example-sno-site.yaml
  │   ├── group-du-sno-ranGen.yaml
  │   ├── group-du-sno-validator-ranGen.yaml
  │   ├── kustomization.yaml
  │   └── ns.yaml
  └── siteconfig
      ├── example-sno.yaml
      ├── KlusterletAddonConfigOverride.yaml
      └── kustomization.yaml

Keep SiteConfig and PolicyGenTemplate CRs in separate directories. Both the SiteConfig and PolicyGenTemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory.
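For example, a minimal sketch of siteconfig/kustomization.yaml, assuming the file names from the tree above and assuming that the ZTP plugin consumes the CRs as Kustomize generators; compare your files with the reference kustomization.yaml files under out/argocd/example:

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  generators:
  - example-sno.yaml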

This directory structure and the kustomization.yaml files must be committed and pushed to your Git repository. The initial push to Git should include the kustomization.yaml files. The SiteConfig (example-sno.yaml) and PolicyGenTemplate (common-ranGen.yaml, group-du-sno*.yaml, and example-sno-site.yaml) files can be omitted and pushed at a later time as required when deploying a site.

The KlusterletAddonConfigOverride.yaml file is required only if one or more SiteConfig CRs that reference it are committed and pushed to Git. See example-sno.yaml for an example of how it is used.