FCOS image layering

Fedora CoreOS (FCOS) image layering allows you to easily extend the functionality of your base FCOS image by layering additional images onto the base image. This layering does not modify the base FCOS image. Instead, it creates a custom layered image that includes all FCOS functionality and adds additional functionality to specific nodes in the cluster.

You create a custom layered image by using a Containerfile and applying it to nodes by using a MachineConfig object. The Machine Config Operator (MCO) overrides the base FCOS image with the image specified by the osImageURL value in the associated machine config and boots the new image. You can remove the custom layered image by deleting the machine config. The MCO then reboots the nodes back to the base FCOS image.

With FCOS image layering, you can install RPMs into your base image, and your custom content will be booted alongside FCOS. The Machine Config Operator (MCO) can roll out these custom layered images and monitor these custom containers in the same way it does for the default FCOS image. FCOS image layering gives you greater flexibility in how you manage your FCOS nodes.

Installing realtime kernel and extensions RPMs as custom layered content is not recommended. This is because these RPMs can conflict with RPMs installed by using a machine config. If there is a conflict, the MCO enters a degraded state when it tries to install the machine config RPM. You need to remove the conflicting extension from your machine config before proceeding.
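
Before you build a custom layered image, you can check whether any existing machine configs already declare extensions that might conflict. The following is a minimal sketch, not part of the official procedure, and assumes the jq tool is available on your workstation:

    # List machine configs that declare one or more extensions
    $ oc get machineconfig -o json \
        | jq -r '.items[] | select(.spec.extensions != null and (.spec.extensions | length) > 0) | .metadata.name'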

As soon as you apply the custom layered image to your cluster, you effectively take ownership of your custom layered images and those nodes. While Red Hat remains responsible for maintaining and updating the base FCOS image on standard nodes, you are responsible for maintaining and updating images on nodes that use a custom layered image. You assume the responsibility for the package you applied with the custom layered image and any issues that might arise with the package.

To apply a custom layered image, you create a Containerfile that references an OKD image and the RPM that you want to apply. You then push the resulting custom layered image to an image registry. In a non-production OKD cluster, create a MachineConfig object for the targeted node pool that points to the new image.

Use the same base FCOS image installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image used in your cluster.
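
For example, a minimal sketch of capturing that image and starting a Containerfile from it might look like the following; the Containerfile name and the htop package are placeholders for your own content:

    # Look up the digest-pinned base image for the cluster release
    $ BASE_IMAGE=$(oc adm release info --image-for rhel-coreos)

    # Start the Containerfile from that exact image
    $ echo "FROM ${BASE_IMAGE}" > Containerfile
    $ echo "RUN rpm-ostree install htop && ostree container commit" >> Containerfile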

FCOS image layering allows you to use the following types of images to create custom layered images:

  • OKD Hotfixes. You can work with Customer Experience and Engagement (CEE) to obtain and apply Hotfix packages on top of your FCOS image. In some instances, you might want a bug fix or enhancement before it is included in an official OKD release. FCOS image layering allows you to easily add the Hotfix before it is officially released and remove the Hotfix when the underlying FCOS image incorporates the fix.

    Some Hotfixes require a Red Hat Support Exception and are outside of the normal scope of OKD support coverage or life cycle policies.

    In the event you want a Hotfix, it will be provided to you based on the Red Hat Hotfix policy. Apply it on top of the base image and test the new custom layered image in a non-production environment. When you are satisfied that the custom layered image is safe to use in production, you can roll it out on your own schedule to specific node pools. You can easily roll back the custom layered image at any time and return to using the default FCOS.

    Example Containerfile to apply a Hotfix

    # Using a 4.12.0 image
    FROM quay.io/openshift-release-dev/ocp-release@sha256...
    # Install hotfix rpm
    RUN rpm-ostree override replace https://example.com/myrepo/haproxy-1.0.16-5.el8.src.rpm && \
        rpm-ostree cleanup -m && \
        ostree container commit
  • Fedora packages. You can download Fedora packages, such as chrony, firewalld, and iputils, from the Red Hat Customer Portal.

    Example Containerfile to apply the firewalld utility

    FROM quay.io/openshift-release-dev/ocp-release@sha256...
    ADD configure-firewall-playbook.yml .
    RUN rpm-ostree install firewalld ansible && \
        ansible-playbook configure-firewall-playbook.yml && \
        rpm -e ansible && \
        ostree container commit

    Example Containerfile to apply the libreswan utility

    # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos`
    # hadolint ignore=DL3006
    FROM quay.io/openshift-release/ocp-release@sha256...
    # Install our config file
    COPY my-host-to-host.conf /etc/ipsec.d/
    # RHEL entitled host is needed here to access RHEL packages
    # Install libreswan as extra RHEL package
    RUN rpm-ostree install libreswan && \
        systemctl enable ipsec && \
        ostree container commit

    Because libreswan requires additional RHEL packages, the image must be built on an entitled Fedora host.

  • Third-party packages. You can download and install RPMs from third-party organizations, such as the following types of packages:

    • Bleeding edge drivers and kernel enhancements to improve performance or add capabilities.

    • Forensic client tools to investigate possible and actual break-ins.

    • Security agents.

    • Inventory agents that provide a coherent view of the entire cluster.

    • SSH key management packages.

    Example Containerfile to apply a third-party package from EPEL

    # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos`
    # hadolint ignore=DL3006
    FROM quay.io/openshift-release/ocp-release@sha256...
    # Enable EPEL (more info at https://docs.fedoraproject.org/en-US/epel/ ) and install htop
    RUN rpm-ostree install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \
        rpm-ostree install htop && \
        ostree container commit

    Example Containerfile to apply a third-party package that has Fedora dependencies

    # Get RHCOS base image of target cluster `oc adm release info --image-for rhel-coreos`
    # hadolint ignore=DL3006
    FROM quay.io/openshift-release/ocp-release@sha256...
    # RHEL entitled host is needed here to access RHEL packages
    # Install fish as third party package from EPEL
    RUN rpm-ostree install https://dl.fedoraproject.org/pub/epel/9/Everything/x86_64/Packages/f/fish-3.3.1-3.el9.x86_64.rpm && \
        ostree container commit

    This Containerfile installs the fish shell. Because fish requires additional RHEL packages, the image must be built on an entitled Fedora host.

After you create the machine config, the Machine Config Operator (MCO) performs the following steps:

  1. Renders a new machine config for the specified pool or pools.

  2. Performs cordon and drain operations on the nodes in the pool or pools.

  3. Writes the rest of the machine config parameters onto the nodes.

  4. Applies the custom layered image to the node.

  5. Reboots the node using the new image.
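
To follow this rollout as it happens, you can watch the affected machine config pool and its nodes. The following is a minimal sketch, assuming the custom layered image targets the worker pool:

    # Watch the pool until UPDATED returns to True
    $ oc get mcp worker -w

    # In another terminal, watch the nodes cordon, drain, and reboot
    $ oc get nodes -w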

It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster.

Applying a FCOS custom layered image

You can easily configure Fedora CoreOS (FCOS) image layering on the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the new custom layered image, overriding the base Fedora CoreOS (FCOS) image.

To apply a custom layered image to your cluster, you must have the custom layered image in a repository that your cluster can access. Then, create a MachineConfig object that points to the custom layered image. You need a separate MachineConfig object for each machine config pool that you want to configure.

When you configure a custom layered image, OKD no longer automatically updates any node that uses the custom layered image. You become responsible for manually updating your nodes as appropriate. If you roll back the custom layer, OKD will again automatically update the node. See the Additional resources section that follows for important information about updating nodes that use a custom layered image.

Prerequisites

  • You must create a custom layered image that is based on an OKD image digest, not a tag.

    You should use the same base FCOS image that is installed on the rest of your cluster. Use the oc adm release info --image-for rhel-coreos command to obtain the base image being used in your cluster.

    For example, the following Containerfile creates a custom layered image from an OKD 4 image and overrides the kernel package with one from CentOS Stream 9:

    Example Containerfile for a custom layered image

    # Using a 4.0 image
    FROM quay.io/openshift-release/ocp-release@sha256... (1)
    # Install hotfix rpm
    RUN rpm-ostree cliwrap install-to-root / && \ (2)
        rpm-ostree override replace http://mirror.stream.centos.org/9-stream/BaseOS/x86_64/os/Packages/kernel-{,core-,modules-,modules-core-,modules-extra-}5.14.0-295.el9.x86_64.rpm && \ (3)
        rpm-ostree cleanup -m && \
        ostree container commit

    (1) Specifies the FCOS base image of your cluster.
    (2) Enables cliwrap. This is currently required to intercept some command invocations made from kernel scripts.
    (3) Replaces the kernel packages.

    Instructions on how to create a Containerfile are beyond the scope of this documentation.

  • Because the process for building a custom layered image is performed outside of the cluster, you must use the --authfile /path/to/pull-secret option with Podman or Buildah. Alternatively, to have the pull secret read by these tools automatically, you can add it to one of the default file locations: ~/.docker/config.json, $XDG_RUNTIME_DIR/containers/auth.json, or ~/.dockercfg. Refer to the containers-auth.json man page for more information. A minimal build-and-push sketch follows this list.

  • You must push the custom layered image to a repository that your cluster can access.
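
For example, a minimal build-and-push sketch with Podman might look like the following; the registry path, the tag, and the pull secret path are placeholders, and the digest lookup assumes the jq tool is available:

    # Build the custom layered image from the Containerfile in the current directory
    $ podman build --authfile /path/to/pull-secret \
        -t quay.io/my-registry/custom-image:latest .

    # Push the image to a repository that the cluster can access
    $ podman push --authfile /path/to/pull-secret \
        quay.io/my-registry/custom-image:latest

    # Record the digest so that the MachineConfig object can reference the image by digest, not by tag
    $ skopeo inspect --authfile /path/to/pull-secret \
        docker://quay.io/my-registry/custom-image:latest | jq -r '.Digest'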

Procedure

  1. Create a machine config file.

    1. Create a YAML file similar to the following:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: worker (1)
        name: os-layer-custom
      spec:
        osImageURL: quay.io/my-registry/custom-image@sha256... (2)

      (1) Specifies the machine config pool to apply the custom layered image.
      (2) Specifies the path to the custom layered image in the repository.
    2. Create the MachineConfig object:

      $ oc create -f <file_name>.yaml

      It is strongly recommended that you test your images outside of your production environment before rolling out to your cluster.

Verification

You can verify that the custom layered image is applied by performing any of the following checks:

  1. Check that the worker machine config pool has rolled out with the new machine config:

    1. Check that the new machine config is created:

      $ oc get mc

      Sample output

      NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE
      00-master 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      00-worker 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      01-master-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      01-master-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      01-worker-container-runtime 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      01-worker-kubelet 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      99-master-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      99-master-ssh 3.2.0 98m
      99-worker-generated-registries 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      99-worker-ssh 3.2.0 98m
      os-layer-custom 10s (1)
      rendered-master-15961f1da260f7be141006404d17d39b 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      rendered-worker-5aff604cb1381a4fe07feaf1595a797e 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 95m
      rendered-worker-5de4837625b1cbc237de6b22bc0bc873 5bdb57489b720096ef912f738b46330a8f577803 3.2.0 4s (2)

      (1) New machine config
      (2) New rendered machine config
    2. Check that the osImageURL value in the new machine config points to the expected image:

      $ oc describe mc rendered-worker-5de4837625b1cbc237de6b22bc0bc873

      Example output

      Name: rendered-worker-5de4837625b1cbc237de6b22bc0bc873
      Namespace:
      Labels: <none>
      Annotations: machineconfiguration.openshift.io/generated-by-controller-version: 8276d9c1f574481043d3661a1ace1f36cd8c3b62
      machineconfiguration.openshift.io/release-image-version: 4.0-ec.3
      API Version: machineconfiguration.openshift.io/v1
      Kind: MachineConfig
      ...
      Os Image URL: quay.io/my-registry/custom-image@sha256...
    3. Check that the associated machine config pool is updating with the new machine config:

      $ oc get mcp

      Sample output

      NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
      master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m
      worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m (1)

      (1) When the UPDATING field is True, the machine config pool is updating with the new machine config. When the field becomes False, the worker machine config pool has rolled out to the new machine config.
    4. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:

      $ oc get nodes

      Example output

      NAME STATUS ROLES AGE VERSION
      ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5
      ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5
      ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
      ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
      ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
      ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5
  2. When the node is back in the Ready state, check that the node is using the custom layered image:

    1. Open an oc debug session to the node. For example:

      $ oc debug node/ip-10-0-155-125.us-west-1.compute.internal
    2. Set /host as the root directory within the debug shell:

      sh-4.4# chroot /host
    3. Run the rpm-ostree status command to view that the custom layered image is in use:

      sh-4.4# sudo rpm-ostree status

      Example output

      State: idle
      Deployments:
      * ostree-unverified-registry:quay.io/my-registry/...
      Digest: sha256:...

Additional resources

Updating with a FCOS custom layered image

Removing a FCOS custom layered image

You can easily revert Fedora CoreOS (FCOS) image layering from the nodes in specific machine config pools. The Machine Config Operator (MCO) reboots those nodes with the cluster base Fedora CoreOS (FCOS) image, overriding the custom layered image.

To remove a Fedora CoreOS (FCOS) custom layered image from your cluster, you need to delete the machine config that applied the image.

Procedure

  1. Delete the machine config that applied the custom layered image.

    $ oc delete mc os-layer-custom

    After deleting the machine config, the nodes reboot.

Verification

You can verify that the custom layered image is removed by performing any of the following checks:

  1. Check that the worker machine config pool is updating with the previous machine config:

    $ oc get mcp

    Sample output

    NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE
    master rendered-master-6faecdfa1b25c114a58cf178fbaa45e2 True False False 3 3 3 0 39m
    worker rendered-worker-6b000dbc31aaee63c6a2d56d04cd4c1b False True False 3 0 0 0 39m (1)

    (1) When the UPDATING field is True, the machine config pool is updating with the previous machine config. When the field becomes False, the worker machine config pool has rolled out to the previous machine config.
  2. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied:

    $ oc get nodes

    Example output

    NAME STATUS ROLES AGE VERSION
    ip-10-0-148-79.us-west-1.compute.internal Ready worker 32m v1.28.5
    ip-10-0-155-125.us-west-1.compute.internal Ready,SchedulingDisabled worker 35m v1.28.5
    ip-10-0-170-47.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
    ip-10-0-174-77.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
    ip-10-0-211-49.us-west-1.compute.internal Ready control-plane,master 42m v1.28.5
    ip-10-0-218-151.us-west-1.compute.internal Ready worker 31m v1.28.5
  3. When the node is back in the Ready state, check that the node is using the base image:

    1. Open an oc debug session to the node. For example:

      $ oc debug node/ip-10-0-155-125.us-west-1.compute.internal
    2. Set /host as the root directory within the debug shell:

      sh-4.4# chroot /host
    3. Run the rpm-ostree status command to view that the base image is in use:

      sh-4.4# sudo rpm-ostree status

      Example output

      State: idle
      Deployments:
      * ostree-unverified-registry:quay.io/openshift-release-dev/ocp-release@sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73
      Digest: sha256:e2044c3cfebe0ff3a99fc207ac5efe6e07878ad59fd4ad5e41f88cb016dacd73

Updating with a FCOS custom layered image

When you configure Fedora CoreOS (FCOS) image layering, OKD no longer automatically updates the node pool that uses the custom layered image. You become responsible for manually updating your nodes as appropriate.

To update a node that uses a custom layered image, follow these general steps:

  1. The cluster automatically upgrades to version x.y.z+1, except for the nodes that use the custom layered image.

  2. Create a new Containerfile that references the updated OKD image and the RPM that you previously applied.

  3. Create a new machine config that points to the updated custom layered image.

Updating a node with a custom layered image is not required. However, if that node gets too far behind the current OKD version, you could experience unexpected results.
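
As an illustration of these general steps, the following sketch rebuilds the image against the updated base and points the existing os-layer-custom machine config at the new image; the registry path, the tag, and the <new_digest> value are placeholders, and you can equally create a new machine config instead of patching the existing one:

    # Get the base image of the updated release and use it in the FROM line of your Containerfile
    $ oc adm release info --image-for rhel-coreos

    # Rebuild the custom layered image against the new base and push it
    $ podman build --authfile /path/to/pull-secret \
        -t quay.io/my-registry/custom-image:updated .
    $ podman push --authfile /path/to/pull-secret \
        quay.io/my-registry/custom-image:updated

    # Point the machine config at the new image digest; the MCO then
    # rolls out the change and reboots the nodes into the updated image
    $ oc patch machineconfig os-layer-custom --type merge \
        -p '{"spec":{"osImageURL":"quay.io/my-registry/custom-image@sha256:<new_digest>"}}'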