Pre-caching images for single-node OpenShift deployments

In environments with limited bandwidth where you use the GitOps Zero Touch Provisioning (ZTP) solution to deploy a large number of clusters, you might want to avoid downloading all the images that are required for bootstrapping and installing OKD. The limited bandwidth at remote single-node OpenShift sites can cause long deployment times. The factory-precaching-cli tool allows you to pre-stage servers before shipping them to the remote site for ZTP provisioning.

The factory-precaching-cli tool does the following:

  • Downloads the RHCOS rootfs image that is required by the minimal ISO to boot.

  • Creates a partition from the installation disk labelled as data.

  • Formats the partition as xfs.

  • Creates a GUID Partition Table (GPT) data partition at the end of the disk, where the size of the partition is configurable by the tool.

  • Copies the container images required to install OKD.

  • Copies the container images required by ZTP to install OKD.

  • Optional: Copies Day-2 Operators to the partition.

The factory-precaching-cli tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Getting the factory-precaching-cli tool

The factory-precaching-cli tool Go binary is publicly available in the telco-ran-tools container image. The factory-precaching-cli tool Go binary in the container image is run with podman on a server that is booted with an FCOS live image. If you work in a disconnected environment or have a private registry, you need to copy the image there so that you can download it to the server.
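
For example, you can mirror the tool image to your own registry with skopeo. The following is a minimal sketch; registry.example.com:5000 is a placeholder for your private registry, and you might need registry credentials or the --dest-tls-verify=false option depending on your registry setup:

  1. # skopeo copy docker://quay.io/openshift-kni/telco-ran-tools:latest \
  2. docker://registry.example.com:5000/openshift-kni/telco-ran-tools:latest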

Procedure

  • Pull the factory-precaching-cli tool image by running the following command:

    1. # podman pull quay.io/openshift-kni/telco-ran-tools:latest

Verification

  • To check that the tool is available, query the current version of the factory-precaching-cli tool Go binary:

    1. # podman run quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli -v

    Example output

    1. factory-precaching-cli version 20221018.120852+main.feecf17

Booting from a live operating system image

You can use the factory-precaching-cli tool to boot servers where only one disk is available and an external disk drive cannot be attached to the server.

FCOS requires that the disk is not in use when it is about to be written with the FCOS image.

Depending on the server hardware, you can mount the FCOS live ISO on the blank server using one of the following methods:

  • Using the Dell RACADM tool on a Dell server.

  • Using the HPONCFG tool on a HP server.

  • Using the Redfish BMC API.

It is recommended to automate the mounting procedure. To automate the procedure, you need to pull the required images and host them on a local HTTP server.
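
For example, a minimal way to host the live ISO is a throwaway HTTP server on the provisioning network. This is only a sketch; the directory, port, and ISO file name are placeholders, and the resulting URL must match the one that you pass in the virtual media request later (RHCOS-live.iso in this procedure):

  1. # mkdir -p /var/www/html
  2. # cp rhcos-live.x86_64.iso /var/www/html/RHCOS-live.iso
  3. # python3 -m http.server 8080 --directory /var/www/html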

Prerequisites

  • You powered up the host.

  • You have network connectivity to the host.

Procedure

This example procedure uses the Redfish BMC API to mount the FCOS live ISO.

  1. Mount the FCOS live ISO:

    1. Check virtual media status:

      1. $ curl --globoff -H "Content-Type: application/json" -H \
      2. "Accept: application/json" -k -X GET --user ${username_password} \
      3. https://$BMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1 | python -m json.tool
    2. Mount the ISO file as a virtual media:

      1. $ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Image": "http://[$HTTPd_IP]/RHCOS-live.iso"}' -X POST https://$BMC_ADDRESS/redfish/v1/Managers/Self/VirtualMedia/1/Actions/VirtualMedia.InsertMedia
    3. Set the boot order to boot from the virtual media once:

      1. $ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} -H "Content-Type: application/json" -H "Accept: application/json" -d '{"Boot":{ "BootSourceOverrideEnabled": "Once", "BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI"}}' -X PATCH https://$BMC_ADDRESS/redfish/v1/Systems/Self
  2. Reboot and ensure that the server is booting from virtual media.
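
    For example, you can trigger the reboot through the same Redfish API. This is a sketch only; the Systems/Self path and the supported ResetType values depend on your BMC:

      1. $ curl --globoff -L -w "%{http_code} %{url_effective}\\n" -ku ${username_password} \
      2. -H "Content-Type: application/json" -H "Accept: application/json" \
      3. -d '{"ResetType": "ForceRestart"}' \
      4. -X POST https://$BMC_ADDRESS/redfish/v1/Systems/Self/Actions/ComputerSystem.Reset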

Partitioning the disk

To run the full pre-caching process, you have to boot from a live ISO and use the factory-precaching-cli tool from a container image to partition and pre-cache all the artifacts required.

A live ISO or FCOS live ISO is required because the disk must not be in use when the operating system (FCOS) is written to the device during the provisioning. Single-disk servers can also be enabled with this procedure.

Prerequisites

  • You have a disk that is not partitioned.

  • You have access to the quay.io/openshift-kni/telco-ran-tools:latest image.

  • You have enough storage to install OKD and pre-cache the required images.

Procedure

  1. Verify that the disk is cleared:

    1. # lsblk

    Example output

    1. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    2. loop0 7:0 0 93.8G 0 loop /run/ephemeral
    3. loop1 7:1 0 897.3M 1 loop /sysroot
    4. sr0 11:0 1 999M 0 rom /run/media/iso
    5. nvme0n1 259:1 0 1.5T 0 disk
  2. Erase any file system, RAID or partition table signatures from the device:

    1. # wipefs -a /dev/nvme0n1

    Example output

    1. /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
    2. /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
    3. /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa

The tool fails if the disk is not empty because it uses partition number 1 of the device for pre-caching the artifacts.

Creating the partition

When the device is ready, you create a single partition and a GPT partition table. The tool automatically labels the partition as data and creates it at the end of the device; otherwise, coreos-installer would override the partition when it writes the FCOS image to the disk.

The coreos-installer requires the partition to be created at the end of the device and to be labelled as data. Both requirements are necessary to save the partition when writing the FCOS image to the disk.
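
For reference, the layout that the tool creates is roughly equivalent to the following manual steps. This is a sketch only, using an example 250 GiB size and device name; the factory-precaching-cli tool performs the partitioning and formatting for you:

  1. # sgdisk --new=1:-250G:0 --change-name=1:data /dev/nvme0n1
  2. # mkfs.xfs /dev/nvme0n1p1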

Prerequisites

  • The container must run as privileged because it formats host devices.

  • You have to mount the /dev folder so that the process can be executed inside the container.

Procedure

In the following example, the size of the partition is 250 GiB to allow pre-caching the DU profile, including the Day 2 Operators.

  1. Run the container as privileged and partition the disk:

    1. # podman run -v /dev:/dev --privileged \
    2. --rm quay.io/openshift-kni/telco-ran-tools:latest -- \
    3. factory-precaching-cli partition \ (1)
    4. -d /dev/nvme0n1 \ (2)
    5. -s 250 (3)
    (1) Specifies the partitioning function of the factory-precaching-cli tool.
    (2) Defines the device that is partitioned.
    (3) Defines the size of the partition in GB.
  2. Check the storage information:

    1. # lsblk

    Example output

    1. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    2. loop0 7:0 0 93.8G 0 loop /run/ephemeral
    3. loop1 7:1 0 897.3M 1 loop /sysroot
    4. sr0 11:0 1 999M 0 rom /run/media/iso
    5. nvme0n1 259:1 0 1.5T 0 disk
    6. └─nvme0n1p1 259:3 0 250G 0 part

Verification

You must verify that the following requirements are met:

  • The device has a GPT partition table.

  • The partition uses the final sectors of the device.

  • The partition is correctly labeled as data.

Query the disk status to verify that the disk is partitioned as expected:

  1. # gdisk -l /dev/nvme0n1

Example output

  1. GPT fdisk (gdisk) version 1.0.3
  2. Partition table scan:
  3. MBR: protective
  4. BSD: not present
  5. APM: not present
  6. GPT: present
  7. Found valid GPT with protective MBR; using GPT.
  8. Disk /dev/nvme0n1: 3125627568 sectors, 1.5 TiB
  9. Model: Dell Express Flash PM1725b 1.6TB SFF
  10. Sector size (logical/physical): 512/512 bytes
  11. Disk identifier (GUID): CB5A9D44-9B3C-4174-A5C1-C64957910B61
  12. Partition table holds up to 128 entries
  13. Main partition table begins at sector 2 and ends at sector 33
  14. First usable sector is 34, last usable sector is 3125627534
  15. Partitions will be aligned on 2048-sector boundaries
  16. Total free space is 2601338846 sectors (1.2 TiB)
  17. Number Start (sector) End (sector) Size Code Name
  18. 1 2601338880 3125627534 250.0 GiB 8300 data
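
You can also check the partition table type and the partition label non-interactively. The following commands are a convenience sketch; they should report gpt for the partition table and data for the partition label:

  1. # lsblk -o NAME,PTTYPE,PARTLABEL /dev/nvme0n1
  2. # blkid -s PARTLABEL -o value /dev/nvme0n1p1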

Mounting the partition

After verifying that the disk is partitioned correctly, you can mount the partition on /mnt.

It is recommended to mount the partition on /mnt because that mount point is used during GitOps ZTP preparation.

  1. Verify that the partition is formatted as xfs:

    1. # lsblk -f /dev/nvme0n1

    Example output

    1. NAME FSTYPE LABEL UUID MOUNTPOINT
    2. nvme0n1
    3. └─nvme0n1p1 xfs 1bee8ea4-d6cf-4339-b690-a76594794071
  2. Mount the partition:

    1. # mount /dev/nvme0n1p1 /mnt/

Verification

  • Check that the partition is mounted:

    1. # lsblk

    Example output

    1. NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    2. loop0 7:0 0 93.8G 0 loop /run/ephemeral
    3. loop1 7:1 0 897.3M 1 loop /sysroot
    4. sr0 11:0 1 999M 0 rom /run/media/iso
    5. nvme0n1 259:1 0 1.5T 0 disk
    6. └─nvme0n1p1 259:2 0 250G 0 part /var/mnt (1)
    (1) The mount point is /var/mnt because the /mnt folder in FCOS is a link to /var/mnt.

Downloading the images

The factory-precaching-cli tool allows you to download the following images to your partitioned server:

  • OKD images

  • Operator images that are included in the distributed unit (DU) profile for 5G RAN sites

  • Operator images from disconnected registries

The list of available Operator images can vary in different OKD releases.

Downloading with parallel workers

The factory-precaching-cli tool uses parallel workers to download multiple images simultaneously. You can configure the number of workers with the --parallel or -p option. By default, the number of workers is set to 80% of the CPUs available to the server.

Your login shell may be restricted to a subset of CPUs, which reduces the CPUs available to the container. To remove this restriction, you can precede your commands with taskset 0xffffffff, for example:

  1. # taskset 0xffffffff podman run --rm quay.io/openshift-kni/telco-ran-tools:latest factory-precaching-cli download --help
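
If you want to set the worker count explicitly, check the number of CPUs that your shell can use and pass the -p option to the download command. The following is a sketch; the worker count of 40 and the release and version values are examples that must match your environment:

  1. # nproc
  2. # taskset 0xffffffff podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm \
  3. quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \
  4. -p 40 -r 4.12.0 --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt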

Preparing to download the OKD images

To download OKD container images, you need to know the multicluster engine (MCE) version. When you use the --du-profile flag, you also need to specify the Red Hat Advanced Cluster Management (RHACM) version running in the hub cluster that is going to provision the single-node OpenShift.

Prerequisites

  • You have RHACM and MCE installed.

  • You partitioned the storage device.

  • You have enough space for the images on the partitioned device.

  • You connected the bare-metal server to the Internet.

  • You have a valid pull secret.

Procedure

  1. Check the RHACM and MCE versions by running the following commands in the hub cluster:

    1. $ oc get csv -A | grep -i advanced-cluster-management

    Example output

    1. open-cluster-management advanced-cluster-management.v2.6.3 Advanced Cluster Management for Kubernetes 2.6.3 advanced-cluster-management.v2.6.3 Succeeded
    1. $ oc get csv -A | grep -i multicluster-engine

    Example output

    1. multicluster-engine cluster-group-upgrades-operator.v0.0.3 cluster-group-upgrades-operator 0.0.3 Pending
    2. multicluster-engine multicluster-engine.v2.1.4 multicluster engine for Kubernetes 2.1.4 multicluster-engine.v2.0.3 Succeeded
    3. multicluster-engine openshift-gitops-operator.v1.5.7 Red Hat OpenShift GitOps 1.5.7 openshift-gitops-operator.v1.5.6-0.1664915551.p Succeeded
    4. multicluster-engine openshift-pipelines-operator-rh.v1.6.4 Red Hat OpenShift Pipelines 1.6.4 openshift-pipelines-operator-rh.v1.6.3 Succeeded
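
    If you want to reuse these values in the later download commands, you can capture them in shell variables. This is a convenience sketch that parses the CSV names shown above and depends on that output format:

    1. $ ACM_VERSION=$(oc get csv -A | awk '/advanced-cluster-management\.v/{print $2; exit}' | sed 's/.*\.v//')
    2. $ MCE_VERSION=$(oc get csv -A | awk '/multicluster-engine\.v/{print $2; exit}' | sed 's/.*\.v//')
    3. $ echo "ACM: $ACM_VERSION MCE: $MCE_VERSION"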
  2. To access the container registry, copy a valid pull secret to the server to be installed:

    1. Create the .docker folder:

      1. $ mkdir /root/.docker
    2. Copy the valid pull secret (config.json file) to the previously created .docker folder:

      1. $ cp config.json /root/.docker/config.json (1)
      (1) /root/.docker/config.json is the default path where podman checks for the login credentials for the registry.

If you use a different registry to pull the required artifacts, you need to copy the proper pull secret. If the local registry uses TLS, you need to include the certificates from the registry as well.
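
To confirm which registries the copied pull secret covers, you can list its entries. This assumes that the jq tool is available on the live image:

  1. # jq -r '.auths | keys[]' /root/.docker/config.json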

Downloading the OKD images

The factory-precaching-cli tool allows you to pre-cache all the container images required to provision a specific OKD release.

Procedure

  • Pre-cache the release by running the following command:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools -- \
    2. factory-precaching-cli download \ (1)
    3. -r 4.12.0 \ (2)
    4. --acm-version 2.6.3 \ (3)
    5. --mce-version 2.1.4 \ (4)
    6. -f /mnt \ (5)
    7. --img quay.io/custom/repository (6)
    (1) Specifies the downloading function of the factory-precaching-cli tool.
    (2) Defines the OKD release version.
    (3) Defines the RHACM version.
    (4) Defines the MCE version.
    (5) Defines the folder where you want to download the images on the disk.
    (6) Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.

    Example output

    1. Generated /mnt/imageset.yaml
    2. Generating list of pre-cached artifacts...
    3. Processing artifact [1/176]: ocp-v4.0-art-dev@sha256_6ac2b96bf4899c01a87366fd0feae9f57b1b61878e3b5823da0c3f34f707fbf5
    4. Processing artifact [2/176]: ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c
    5. Processing artifact [3/176]: ocp-v4.0-art-dev@sha256_a480390e91b1c07e10091c3da2257180654f6b2a735a4ad4c3b69dbdb77bbc06
    6. Processing artifact [4/176]: ocp-v4.0-art-dev@sha256_ecc5d8dbd77e326dba6594ff8c2d091eefbc4d90c963a9a85b0b2f0e6155f995
    7. Processing artifact [5/176]: ocp-v4.0-art-dev@sha256_274b6d561558a2f54db08ea96df9892315bb773fc203b1dbcea418d20f4c7ad1
    8. Processing artifact [6/176]: ocp-v4.0-art-dev@sha256_e142bf5020f5ca0d1bdda0026bf97f89b72d21a97c9cc2dc71bf85050e822bbf
    9. ...
    10. Processing artifact [175/176]: ocp-v4.0-art-dev@sha256_16cd7eda26f0fb0fc965a589e1e96ff8577e560fcd14f06b5fda1643036ed6c8
    11. Processing artifact [176/176]: ocp-v4.0-art-dev@sha256_cf4d862b4a4170d4f611b39d06c31c97658e309724f9788e155999ae51e7188f
    12. ...
    13. Summary:
    14. Release: 4.12.0
    15. Hub Version: 2.6.3
    16. ACM Version: 2.6.3
    17. MCE Version: 2.1.4
    18. Include DU Profile: No
    19. Workers: 83

Verification

  • Check that all the images are compressed in the target folder of the server:

    1. $ ls -l /mnt (1)
    (1) It is recommended that you pre-cache the images in the /mnt folder.

    Example output

    1. -rw-r--r--. 1 root root 136352323 Oct 31 15:19 ocp-v4.0-art-dev@sha256_edec37e7cd8b1611d0031d45e7958361c65e2005f145b471a8108f1b54316c07.tgz
    2. -rw-r--r--. 1 root root 156092894 Oct 31 15:33 ocp-v4.0-art-dev@sha256_ee51b062b9c3c9f4fe77bd5b3cc9a3b12355d040119a1434425a824f137c61a9.tgz
    3. -rw-r--r--. 1 root root 172297800 Oct 31 15:29 ocp-v4.0-art-dev@sha256_ef23d9057c367a36e4a5c4877d23ee097a731e1186ed28a26c8d21501cd82718.tgz
    4. -rw-r--r--. 1 root root 171539614 Oct 31 15:23 ocp-v4.0-art-dev@sha256_f0497bb63ef6834a619d4208be9da459510df697596b891c0c633da144dbb025.tgz
    5. -rw-r--r--. 1 root root 160399150 Oct 31 15:20 ocp-v4.0-art-dev@sha256_f0c339da117cde44c9aae8d0bd054bceb6f19fdb191928f6912a703182330ac2.tgz
    6. -rw-r--r--. 1 root root 175962005 Oct 31 15:17 ocp-v4.0-art-dev@sha256_f19dd2e80fb41ef31d62bb8c08b339c50d193fdb10fc39cc15b353cbbfeb9b24.tgz
    7. -rw-r--r--. 1 root root 174942008 Oct 31 15:33 ocp-v4.0-art-dev@sha256_f1dbb81fa1aa724e96dd2b296b855ff52a565fbef003d08030d63590ae6454df.tgz
    8. -rw-r--r--. 1 root root 246693315 Oct 31 15:31 ocp-v4.0-art-dev@sha256_f44dcf2c94e4fd843cbbf9b11128df2ba856cd813786e42e3da1fdfb0f6ddd01.tgz
    9. -rw-r--r--. 1 root root 170148293 Oct 31 15:00 ocp-v4.0-art-dev@sha256_f48b68d5960ba903a0d018a10544ae08db5802e21c2fa5615a14fc58b1c1657c.tgz
    10. -rw-r--r--. 1 root root 168899617 Oct 31 15:16 ocp-v4.0-art-dev@sha256_f5099b0989120a8d08a963601214b5c5cb23417a707a8624b7eb52ab788a7f75.tgz
    11. -rw-r--r--. 1 root root 176592362 Oct 31 15:05 ocp-v4.0-art-dev@sha256_f68c0e6f5e17b0b0f7ab2d4c39559ea89f900751e64b97cb42311a478338d9c3.tgz
    12. -rw-r--r--. 1 root root 157937478 Oct 31 15:37 ocp-v4.0-art-dev@sha256_f7ba33a6a9db9cfc4b0ab0f368569e19b9fa08f4c01a0d5f6a243d61ab781bd8.tgz
    13. -rw-r--r--. 1 root root 145535253 Oct 31 15:26 ocp-v4.0-art-dev@sha256_f8f098911d670287826e9499806553f7a1dd3e2b5332abbec740008c36e84de5.tgz
    14. -rw-r--r--. 1 root root 158048761 Oct 31 15:40 ocp-v4.0-art-dev@sha256_f914228ddbb99120986262168a705903a9f49724ffa958bb4bf12b2ec1d7fb47.tgz
    15. -rw-r--r--. 1 root root 167914526 Oct 31 15:37 ocp-v4.0-art-dev@sha256_fa3ca9401c7a9efda0502240aeb8d3ae2d239d38890454f17fe5158b62305010.tgz
    16. -rw-r--r--. 1 root root 164432422 Oct 31 15:24 ocp-v4.0-art-dev@sha256_fc4783b446c70df30b3120685254b40ce13ba6a2b0bf8fb1645f116cf6a392f1.tgz
    17. -rw-r--r--. 1 root root 306643814 Oct 31 15:11 troubleshoot@sha256_b86b8aea29a818a9c22944fd18243fa0347c7a2bf1ad8864113ff2bb2d8e0726.tgz

Downloading the Operator images

You can also pre-cache Day-2 Operators used in the 5G Radio Access Network (RAN) Distributed Unit (DU) cluster configuration. The Day-2 Operators depend on the installed OKD version.

You need to include the RHACM hub and MCE Operator versions by using the --acm-version and --mce-version flags so that the factory-precaching-cli tool can pre-cache the appropriate container images for the RHACM and MCE Operators.

Procedure

  • Pre-cache the Operator images:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ (1)
    2. -r 4.12.0 \ (2)
    3. --acm-version 2.6.3 \ (3)
    4. --mce-version 2.1.4 \ (4)
    5. -f /mnt \ (5)
    6. --img quay.io/custom/repository \ (6)
    7. --du-profile -s (7)
    (1) Specifies the downloading function of the factory-precaching-cli tool.
    (2) Defines the OKD release version.
    (3) Defines the RHACM version.
    (4) Defines the MCE version.
    (5) Defines the folder where you want to download the images on the disk.
    (6) Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
    (7) Specifies pre-caching the Operators included in the DU configuration.

    Example output

    1. Generated /mnt/imageset.yaml
    2. Generating list of pre-cached artifacts...
    3. Processing artifact [1/379]: ocp-v4.0-art-dev@sha256_7753a8d9dd5974be8c90649aadd7c914a3d8a1f1e016774c7ac7c9422e9f9958
    4. Processing artifact [2/379]: ose-kube-rbac-proxy@sha256_c27a7c01e5968aff16b6bb6670423f992d1a1de1a16e7e260d12908d3322431c
    5. Processing artifact [3/379]: ocp-v4.0-art-dev@sha256_370e47a14c798ca3f8707a38b28cfc28114f492bb35fe1112e55d1eb51022c99
    6. ...
    7. Processing artifact [378/379]: ose-local-storage-operator@sha256_0c81c2b79f79307305e51ce9d3837657cf9ba5866194e464b4d1b299f85034d0
    8. Processing artifact [379/379]: multicluster-operators-channel-rhel8@sha256_c10f6bbb84fe36e05816e873a72188018856ad6aac6cc16271a1b3966f73ceb3
    9. ...
    10. Summary:
    11. Release: 4.12.0
    12. Hub Version: 2.6.3
    13. ACM Version: 2.6.3
    14. MCE Version: 2.1.4
    15. Include DU Profile: Yes
    16. Workers: 83

Pre-caching custom images in disconnected environments

The --generate-imageset argument stops the factory-precaching-cli tool after the ImageSetConfiguration custom resource (CR) is generated. This allows you to customize the ImageSetConfiguration CR before downloading any images. After you customize the CR, you can use the --skip-imageset argument to download the images that you specified in the ImageSetConfiguration CR.

You can customize the ImageSetConfiguration CR in the following ways:

  • Add Operators and additional images

  • Remove Operators and additional images

  • Change Operator and catalog sources to local or disconnected registries

Procedure

  1. Pre-cache the images:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download \ (1)
    2. -r 4.12.0 \ (2)
    3. --acm-version 2.6.3 \ (3)
    4. --mce-version 2.1.4 \ (4)
    5. -f /mnt \ (5)
    6. --img quay.io/custom/repository \ (6)
    7. --du-profile -s \ (7)
    8. --generate-imageset (8)
    (1) Specifies the downloading function of the factory-precaching-cli tool.
    (2) Defines the OKD release version.
    (3) Defines the RHACM version.
    (4) Defines the MCE version.
    (5) Defines the folder where you want to download the images on the disk.
    (6) Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
    (7) Specifies pre-caching the Operators included in the DU configuration.
    (8) The --generate-imageset argument generates the ImageSetConfiguration CR only, which allows you to customize the CR.

    Example output

    1. Generated /mnt/imageset.yaml

    Example ImageSetConfiguration CR

    1. apiVersion: mirror.openshift.io/v1alpha2
    2. kind: ImageSetConfiguration
    3. mirror:
    4.   platform:
    5.     channels:
    6.     - name: stable-4.12
    7.       minVersion: 4.12.0 (1)
    8.       maxVersion: 4.12.0
    9.   additionalImages:
    10.   - name: quay.io/custom/repository
    11.   operators:
    12.   - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    13.     packages:
    14.     - name: advanced-cluster-management (2)
    15.       channels:
    16.       - name: 'release-2.6'
    17.         minVersion: 2.6.3
    18.         maxVersion: 2.6.3
    19.     - name: multicluster-engine (2)
    20.       channels:
    21.       - name: 'stable-2.1'
    22.         minVersion: 2.1.4
    23.         maxVersion: 2.1.4
    24.     - name: local-storage-operator (3)
    25.       channels:
    26.       - name: 'stable'
    27.     - name: ptp-operator (3)
    28.       channels:
    29.       - name: 'stable'
    30.     - name: sriov-network-operator (3)
    31.       channels:
    32.       - name: 'stable'
    33.     - name: cluster-logging (3)
    34.       channels:
    35.       - name: 'stable'
    36.     - name: lvms-operator (3)
    37.       channels:
    38.       - name: 'stable-4.12'
    39.     - name: amq7-interconnect-operator (3)
    40.       channels:
    41.       - name: '1.10.x'
    42.     - name: bare-metal-event-relay (3)
    43.       channels:
    44.       - name: 'stable'
    45.   - catalog: registry.redhat.io/redhat/certified-operator-index:v4.12
    46.     packages:
    47.     - name: sriov-fec (3)
    48.       channels:
    49.       - name: 'stable'
    (1) The platform versions match the versions passed to the tool.
    (2) The versions of the RHACM and MCE Operators match the versions passed to the tool.
    (3) The CR contains all the specified DU Operators.
  2. Customize the catalog resource in the CR:

    1. apiVersion: mirror.openshift.io/v1alpha2
    2. kind: ImageSetConfiguration
    3. mirror:
    4.   platform:
    5.   [...]
    6.   operators:
    7.   - catalog: eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/certified-operator-index:v4.12
    8.     packages:
    9.     - name: sriov-fec
    10.       channels:
    11.       - name: 'stable'

    When you download images by using a local or disconnected registry, you have to first add certificates for the registries that you want to pull the content from.

  3. To avoid any errors, copy the registry certificate into your server:

    1. # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.
  4. Then, update the certificates trust store:

    1. # update-ca-trust
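
    You can optionally confirm that the expected certificate is in place before rerunning the download, for example:

    1. # openssl x509 -in /etc/pki/ca-trust/source/anchors/eko4-ca.crt -noout -subject -enddate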
  5. Mount the host /etc/pki folder into the factory-cli image:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- \
    2. factory-precaching-cli download \ (1)
    3. -r 4.12.0 \ (2)
    4. --acm-version 2.6.3 \ (3)
    5. --mce-version 2.1.4 \ (4)
    6. -f /mnt \ (5)
    7. --img quay.io/custom/repository \ (6)
    8. --du-profile -s \ (7)
    9. --skip-imageset (8)
    (1) Specifies the downloading function of the factory-precaching-cli tool.
    (2) Defines the OKD release version.
    (3) Defines the RHACM version.
    (4) Defines the MCE version.
    (5) Defines the folder where you want to download the images on the disk.
    (6) Optional. Defines the repository where you store your additional images. These images are downloaded and pre-cached on the disk.
    (7) Specifies pre-caching the Operators included in the DU configuration.
    (8) The --skip-imageset argument allows you to download the images that you specified in your customized ImageSetConfiguration CR.
  6. Download the images without generating a new imageSetConfiguration CR:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker --privileged --rm quay.io/openshift-kni/telco-ran-tools:latest -- factory-precaching-cli download -r 4.12.0 \
    2. --acm-version 2.6.3 --mce-version 2.1.4 -f /mnt \
    3. --img quay.io/custom/repository \
    4. --du-profile -s \
    5. --skip-imageset

Pre-caching images in GitOps ZTP

The SiteConfig manifest defines how an OpenShift cluster is to be installed and configured. In the GitOps Zero Touch Provisioning (ZTP) provisioning workflow, the factory-precaching-cli tool requires the following additional fields in the SiteConfig manifest:

  • clusters.ignitionConfigOverride

  • nodes.installerArgs

  • nodes.ignitionConfigOverride

Example SiteConfig with additional fields

  1. apiVersion: ran.openshift.io/v1
  2. kind: SiteConfig
  3. metadata:
  4.   name: "example-5g-lab"
  5.   namespace: "example-5g-lab"
  6. spec:
  7.   baseDomain: "example.domain.redhat.com"
  8.   pullSecretRef:
  9.     name: "assisted-deployment-pull-secret"
  10.   clusterImageSetNameRef: "img4.9.10-x86-64-appsub"
  11.   sshPublicKey: "ssh-rsa ..."
  12.   clusters:
  13.   - clusterName: "sno-worker-0"
  14.     clusterImageSetNameRef: "eko4-img4.11.5-x86-64-appsub"
  15.     clusterLabels:
  16.       group-du-sno: ""
  17.       common-411: true
  18.       sites : "example-5g-lab"
  19.       vendor: "OpenShift"
  20.     clusterNetwork:
  21.     - cidr: 10.128.0.0/14
  22.       hostPrefix: 23
  23.     machineNetwork:
  24.     - cidr: 10.19.32.192/26
  25.     serviceNetwork:
  26.     - 172.30.0.0/16
  27.     networkType: "OVNKubernetes"
  28.     additionalNTPSources:
  29.     - clock.corp.redhat.com
  30. ignitionConfigOverride: '{"ignition":{"version":"3.1.0"},"systemd":{"units":[{"name":"var-mnt.mount","enabled":true,"contents":"[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-images.service\nBindsTo=precache-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-images.service"},{"name":"precache-images.service","enabled":true,"contents":"[Unit]\nDescription=Extracts the precached images in discovery stage\nAfter=var-mnt.mount\nBefore=agent.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ai.sh\n#TimeoutStopSec=30\n\n[Install]\nWantedBy=multi-user.target default.target\nWantedBy=agent.service"}]},"storage":{"files":[{"overwrite":true,"path":"/usr/local/bin/extract-ai.sh","mode":755,"user":{"name":"root"},"contents":{"source":"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ai-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0A%23%20workaround%20while%20https%3A%2F%2Fgithub.com%2Fopenshift%2Fassisted-service%2Fpull%2F3546%0A%23cp%20%2Fvar%2Fmnt%2Fmodified-rhcos-4.10.3-x86_64-metal.x86_64.raw.gz%20%2Fvar%2Ftmp%2F.%0A%0Aexit%200"}},{"overwrite":true,"path":"/usr/local/bin/agent-fix-bz1964591","mode":755,"user":{"name":"root"},"contents":{"source":"data:,%23%21%2Fusr%2Fbin%2Fsh%0A%0A%23%20This%20script%20is%20a%20workaround%20for%20bugzilla%201964591%20where%20symlinks%20inside%20%2Fvar%2Flib%2Fcontainers%2F%20get%0A%23%20corrupted%20under%20some%20circumstances.%0A%23%0A%23%20In%20order%20to%20let%20agent.service%20start%20correctly%20we%20are%20checking%20here%20whether%20the%20requested%0A%23%20container%20image%20exists%20and%20in%20case%20%22podman%20images%22%20returns%20an%20error%20we%20try%20removing%20the%20faulty%0A%23%20image.%0A%23%0A%23%20In%20such%20a%20scenario%20agent.service%20will%20detect%20the%20image%20is%20not%20present%20and%20pull%20it%20again.%20I
n%20case%0A%23%20the%20image%20is%20present%20and%20can%20be%20detected%20correctly%2C%20no%20any%20action%20is%20required.%0A%0AIMAGE%3D%24%28echo%20%241%20%7C%20sed%20%27s%2F%3A.%2A%2F%2F%27%29%0Apodman%20image%20exists%20%24IMAGE%20%7C%7C%20echo%20%22already%20loaded%22%20%7C%7C%20echo%20%22need%20to%20be%20pulled%22%0A%23podman%20images%20%7C%20grep%20%24IMAGE%20%7C%7C%20podman%20rmi%20--force%20%241%20%7C%7C%20true"}}]}}'
  31.     nodes:
  32.     - hostName: "snonode.sno-worker-0.example.domain.redhat.com"
  33.       role: "master"
  34.       bmcAddress: "idrac-virtualmedia+https://10.19.28.53/redfish/v1/Systems/System.Embedded.1"
  35.       bmcCredentialsName:
  36.         name: "worker0-bmh-secret"
  37.       bootMACAddress: "e4:43:4b:bd:90:46"
  38.       bootMode: "UEFI"
  39.       rootDeviceHints:
  40.         deviceName: /dev/nvme0n1
  41.       cpuset: "0-1,40-41"
  42.       installerArgs: '["--save-partlabel", "data"]'
  43. ignitionConfigOverride: '{"ignition":{"version":"3.1.0"},"systemd":{"units":[{"name":"var-mnt.mount","enabled":true,"contents":"[Unit]\nDescription=Mount partition with artifacts\nBefore=precache-ocp-images.service\nBindsTo=precache-ocp-images.service\nStopWhenUnneeded=true\n\n[Mount]\nWhat=/dev/disk/by-partlabel/data\nWhere=/var/mnt\nType=xfs\nTimeoutSec=30\n\n[Install]\nRequiredBy=precache-ocp-images.service"},{"name":"precache-ocp-images.service","enabled":true,"contents":"[Unit]\nDescription=Extracts the precached OCP images into containers storage\nAfter=var-mnt.mount\nBefore=machine-config-daemon-pull.service nodeip-configuration.service\n\n[Service]\nType=oneshot\nUser=root\nWorkingDirectory=/var/mnt\nExecStart=bash /usr/local/bin/extract-ocp.sh\nTimeoutStopSec=60\n\n[Install]\nWantedBy=multi-user.target"}]},"storage":{"files":[{"overwrite":true,"path":"/usr/local/bin/extract-ocp.sh","mode":755,"user":{"name":"root"},"contents":{"source":"data:,%23%21%2Fbin%2Fbash%0A%0AFOLDER%3D%22%24%7BFOLDER%3A-%24%28pwd%29%7D%22%0AOCP_RELEASE_LIST%3D%22%24%7BOCP_RELEASE_LIST%3A-ocp-images.txt%7D%22%0ABINARY_FOLDER%3D%2Fvar%2Fmnt%0A%0Apushd%20%24FOLDER%0A%0Atotal_copies%3D%24%28sort%20-u%20%24BINARY_FOLDER%2F%24OCP_RELEASE_LIST%20%7C%20wc%20-l%29%20%20%23%20Required%20to%20keep%20track%20of%20the%20pull%20task%20vs%20total%0Acurrent_copy%3D1%0A%0Awhile%20read%20-r%20line%3B%0Ado%0A%20%20uri%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%241%7D%27%29%0A%20%20%23tar%3D%24%28echo%20%22%24line%22%20%7C%20awk%20%27%7Bprint%242%7D%27%29%0A%20%20podman%20image%20exists%20%24uri%0A%20%20if%20%5B%5B%20%24%3F%20-eq%200%20%5D%5D%3B%20then%0A%20%20%20%20%20%20echo%20%22Skipping%20existing%20image%20%24tar%22%0A%20%20%20%20%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20%20%20%20%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%0A%20%20%20%20%20%20continue%0A%20%20fi%0A%20%20tar%3D%24%28echo%20%22%24uri%22%20%7C%20%20rev%20%7C%20cut%20-d%20%22%2F%22%20-f1%20%7C%20rev%20%7C%20tr%20%22%3A%22%20%22_%22%29%0A%20%20tar%20zxvf%20%24%7Btar%7D.tgz%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-f%20%24%7Btar%7D.gz%3B%20fi%0A%20%20echo%20%22Copying%20%24%7Buri%7D%20%5B%24%7Bcurrent_copy%7D%2F%24%7Btotal_copies%7D%5D%22%0A%20%20skopeo%20copy%20dir%3A%2F%2F%24%28pwd%29%2F%24%7Btar%7D%20containers-storage%3A%24%7Buri%7D%0A%20%20if%20%5B%20%24%3F%20-eq%200%20%5D%3B%20then%20rm%20-rf%20%24%7Btar%7D%3B%20current_copy%3D%24%28%28current_copy%20%2B%201%29%29%3B%20fi%0Adone%20%3C%20%24%7BBINARY_FOLDER%7D%2F%24%7BOCP_RELEASE_LIST%7D%0A%0Aexit%200"}}]}}'
  44.       nodeNetwork:
  45.         config:
  46.           interfaces:
  47.           - name: ens1f0
  48.             type: ethernet
  49.             state: up
  50.             macAddress: "AA:BB:CC:11:22:33"
  51.             ipv4:
  52.               enabled: true
  53.               dhcp: true
  54.             ipv6:
  55.               enabled: false
  56.         interfaces:
  57.         - name: "ens1f0"
  58.           macAddress: "AA:BB:CC:11:22:33"

Understanding the clusters.ignitionConfigOverride field

The clusters.ignitionConfigOverride field adds a configuration in Ignition format during the GitOps ZTP discovery stage. The configuration includes systemd services in the ISO mounted in virtual media. This way, the scripts are part of the discovery FCOS live ISO and they can be used to load the Assisted Installer (AI) images.

systemd services

The systemd services are var-mnt.mount and precache-images.service. The precache-images.service unit depends on the disk partition being mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ai.sh.
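
For reference, the var-mnt.mount unit that is embedded in the clusters.ignitionConfigOverride field above expands to the following unit file (blank lines omitted):

  1. [Unit]
  2. Description=Mount partition with artifacts
  3. Before=precache-images.service
  4. BindsTo=precache-images.service
  5. StopWhenUnneeded=true
  6. [Mount]
  7. What=/dev/disk/by-partlabel/data
  8. Where=/var/mnt
  9. Type=xfs
  10. TimeoutSec=30
  11. [Install]
  12. RequiredBy=precache-images.service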

extract-ai.sh

The extract-ai.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally.
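
Decoded from the URL-encoded storage file in the override above, the core loop of extract-ai.sh looks roughly like the following abridged sketch:

  1. #!/bin/bash
  2. # Abridged: iterates over the image list that the download step stored on the partition
  3. while read -r line; do
  4.   uri=$(echo "$line" | awk '{print $1}')
  5.   podman image exists "$uri" && continue                      # skip images that are already loaded
  6.   tar=$(echo "$uri" | rev | cut -d "/" -f1 | rev | tr ":" "_")
  7.   tar zxvf "${tar}.tgz"                                       # unpack the pre-cached artifact
  8.   skopeo copy dir://$(pwd)/${tar} containers-storage:${uri}   # load it into local container storage
  9. done < /var/mnt/ai-images.txt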

agent-fix-bz1964591

The agent-fix-bz1964591 script is a workaround for an AI issue. To prevent AI from removing the images, which can force the agent.service to pull the images again from the registry, the agent-fix-bz1964591 script checks if the requested container images exist.

Understanding the nodes.installerArgs field

The nodes.installerArgs field allows you to configure how the coreos-installer utility writes FCOS to disk. You need to indicate that the disk partition labeled data must be saved, because the artifacts stored in the data partition are needed during the OKD installation stage.

The extra parameters are passed directly to the coreos-installer utility that writes FCOS to disk. On the next reboot, the operating system starts from the disk.

You can pass several options to the coreos-installer utility:

  1. OPTIONS:
  2. ...
  3. -u, --image-url <URL>
  4. Manually specify the image URL
  5. -f, --image-file <path>
  6. Manually specify a local image file
  7. -i, --ignition-file <path>
  8. Embed an Ignition config from a file
  9. -I, --ignition-url <URL>
  10. Embed an Ignition config from a URL
  11. ...
  12. --save-partlabel <lx>...
  13. Save partitions with this label glob
  14. --save-partindex <id>...
  15. Save partitions with this number or range
  16. ...
  17. --insecure-ignition
  18. Allow Ignition URL without HTTPS or hash
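
In this workflow, the only extra argument that you need is --save-partlabel with the data label, which the example SiteConfig above passes through the nodes.installerArgs field:

  1. installerArgs: '["--save-partlabel", "data"]'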

Understanding the nodes.ignitionConfigOverride field

Similarly to clusters.ignitionConfigOverride, the nodes.ignitionConfigOverride field allows the addition of configurations in Ignition format to the coreos-installer utility, but at the OKD installation stage. When FCOS is written to disk, the extra configuration included in the GitOps ZTP discovery ISO is no longer available. During the discovery stage, the extra configuration is stored in the memory of the live OS.

At this stage, the number of container images extracted and loaded is bigger than in the discovery stage. Depending on the OKD release and whether you install the Day-2 Operators, the installation time can vary.

At the installation stage, the var-mnt.mount and precache-ocp-images.service systemd services are used.

precache-ocp-images.service

The precache-ocp-images.service unit depends on the disk partition being mounted in /var/mnt by the var-mnt.mount unit. The service calls a script called extract-ocp.sh.

To extract all the images before the OKD installation, precache-ocp-images.service must run before the machine-config-daemon-pull.service and nodeip-configuration.service services.
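
Decoded from the node-level ignitionConfigOverride shown earlier, the unit makes this ordering explicit (blank lines omitted):

  1. [Unit]
  2. Description=Extracts the precached OCP images into containers storage
  3. After=var-mnt.mount
  4. Before=machine-config-daemon-pull.service nodeip-configuration.service
  5. [Service]
  6. Type=oneshot
  7. User=root
  8. WorkingDirectory=/var/mnt
  9. ExecStart=bash /usr/local/bin/extract-ocp.sh
  10. TimeoutStopSec=60
  11. [Install]
  12. WantedBy=multi-user.target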

extract-ocp.sh

The extract-ocp.sh script extracts and loads the required images from the disk partition to the local container storage. When the script finishes successfully, you can use the images locally.

When you upload the SiteConfig and the optional PolicyGenTemplates custom resources (CRs) to the Git repo, which Argo CD is monitoring, you can start the GitOps ZTP workflow by syncing the CRs with the hub cluster.

Troubleshooting

Rendered catalog is invalid

When you download images by using a local or disconnected registry, you might see the The rendered catalog is invalid error. This means that the certificates of the new registry that you want to pull content from are missing.

The factory-precaching-cli tool image is built on a UBI Fedora image. Certificate paths and locations are the same on FCOS.

Example error

  1. Generating list of pre-cached artifacts...
  2. error: unable to run command oc-mirror -c /mnt/imageset.yaml file:///tmp/fp-cli-3218002584/mirror --ignore-history --dry-run: Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/publish
  3. Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/v2
  4. Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/charts
  5. Creating directory: /tmp/fp-cli-3218002584/mirror/oc-mirror-workspace/src/release-signatures
  6. backend is not configured in /mnt/imageset.yaml, using stateless mode
  7. backend is not configured in /mnt/imageset.yaml, using stateless mode
  8. No metadata detected, creating new workspace
  9. level=info msg=trying next host error=failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority host=eko4.cloud.lab.eng.bos.redhat.com:8443
  10. The rendered catalog is invalid.
  11. Run "oc-mirror list operators --catalog CATALOG-NAME --package PACKAGE-NAME" for more information.
  12. error: error rendering new refs: render reference "eko4.cloud.lab.eng.bos.redhat.com:8443/redhat/redhat-operator-index:v4.11": error resolving name : failed to do request: Head "https://eko4.cloud.lab.eng.bos.redhat.com:8443/v2/redhat/redhat-operator-index/manifests/v4.11": x509: certificate signed by unknown authority

Procedure

  1. Copy the registry certificate into your server:

    1. # cp /tmp/eko4-ca.crt /etc/pki/ca-trust/source/anchors/.
  2. Update the certificates truststore:

    1. # update-ca-trust
  3. Mount the host /etc/pki folder into the factory-cli image:

    1. # podman run -v /mnt:/mnt -v /root/.docker:/root/.docker -v /etc/pki:/etc/pki --privileged -it --rm quay.io/openshift-kni/telco-ran-tools:latest -- \
    2. factory-precaching-cli download -r 4.11.5 --acm-version 2.5.4 \
    3. --mce-version 2.0.4 -f /mnt --img quay.io/custom/repository \
    4. --du-profile -s --skip-imageset