OKD installation overview

About OKD installation

The OKD installation program offers four methods for deploying a cluster:

  • Interactive: You can deploy a cluster with the web-based Assisted Installer. This is the recommended approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OKD: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios.

  • Local Agent-based: You can deploy a cluster locally with the agent-based installer. It provides many of the benefits of the Assisted Installer, but you must download and configure the agent-based installer first, and configuration is done with a command-line interface. This approach is ideal for air-gapped or restricted networks.

  • Automated: You can deploy a cluster on infrastructure that the installation program provisions and the cluster maintains. The installation program uses each cluster host’s baseboard management controller (BMC) for provisioning. You can deploy clusters in connected, air-gapped, or restricted networks.

  • Full control: You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected, air-gapped, or restricted networks.

The clusters have the following characteristics:

  • Highly available infrastructure with no single points of failure is available by default.

  • Administrators maintain control over what updates are applied and when.

About the installation program

You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as the Ignition config files for the bootstrap, control plane (master), and worker machines. You can start an OKD cluster with these three configurations and correctly configured infrastructure.

The OKD installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel, with the ultimate target being a running cluster. Because the installation program meets each dependency, it recognizes and reuses existing components instead of running commands to create them again.


Figure 1. OKD installation targets and dependencies

About Fedora CoreOS (FCOS)

Post-installation, each cluster machine uses Fedora CoreOS (FCOS) as the operating system. FCOS is the immutable container host version of Fedora and features a Fedora kernel with SELinux enabled by default. It includes the kubelet, which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes.

Every control plane machine in an OKD 4.13 cluster must use FCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree. Together, these technologies enable OKD to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up-to-date. These in-place updates can reduce the burden on operations teams.

If you use FCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation.

Glossary of common terms for OKD installing

This glossary defines common terms that are used in the installation content. Understanding these terms helps you navigate the installation process.

Assisted Installer

An installer hosted at console.redhat.com that provides a web user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs FCOS and an agent. Together, the Assisted Installer and agent provide pre-installation validation and installation for the cluster.

Agent-based installer

An installer similar to the Assisted Installer, except that you must download the agent-based installer first. The agent-based installer is ideal for air-gapped or restricted networks.

Bootstrap node

A temporary machine that runs a minimal Kubernetes configuration to deploy the OKD control plane.

Control plane

A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines.

Compute node

Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes.

Disconnected installation

There are situations where parts of a data center might not have access to the internet, even through proxy servers. You can still install OKD in these environments, but you must download the required software and images and make them available to the disconnected environment.

The OKD installation program

A program that provisions the infrastructure and deploys a cluster.

Installer-provisioned infrastructure

The installation program deploys and configures the infrastructure that the cluster runs on.

Ignition config files

A file that Ignition uses to configure Fedora CoreOS (FCOS) during operating system initialization. The installation program generates different Ignition config files to initialize bootstrap, control plane, and worker nodes.

Kubernetes manifests

Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemon sets, and so on.

Kubelet

A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod.

Load balancers

A load balancer serves as the single point of contact for clients. Load balancers for the API distribute incoming traffic across control plane nodes.

Machine Config Operator

An Operator that manages and applies configuration and updates of the base operating system and container runtime, including everything between the kernel and kubelet for the nodes in the cluster.

Operators

The preferred method of packaging, deploying, and managing a Kubernetes application in an OKD cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers.

User-provisioned infrastructure

You can install OKD on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.

Installation process

Except for the Assisted Installer, when you install an OKD cluster, you download the installation program from https://github.com/openshift/okd/releases.

In OKD 4.13, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type.

  • To deploy a cluster with the Assisted Installer, you configure the cluster settings by using the Assisted Installer. There is no installer to download and configure. After you complete the configuration, you download a discovery ISO and boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and on other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines.

  • To deploy clusters with the agent-based installer, you download the agent-based installer first. Then, you configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you, so you do not need to interact with the installation program or set up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for air-gapped or restricted network environments.

  • For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.

  • If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines.

The installer uses three sets of files during installation: an installation configuration file that is named install-config.yaml, Kubernetes manifests, and Ignition config files for your machine types.

It is possible to modify the Kubernetes manifests and the Ignition config files that control the underlying FCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying the Kubernetes manifests and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support.

The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster.
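
The transformation described above maps onto the installer's staged targets, which you can also invoke individually. The sketch below shows the sequence; the `--dir` value is a placeholder, and because the commands require the openshift-install binary, only the final listing of the fixed Ignition file names is run directly:

```shell
# Staged asset generation (each target consumes the previous stage's assets).
# These commands need the openshift-install binary, so they are shown commented:
#   openshift-install create install-config --dir=<install-dir>
#   openshift-install create manifests --dir=<install-dir>
#   openshift-install create ignition-configs --dir=<install-dir>
# The final stage writes one Ignition config per machine type:
for machine_type in bootstrap master worker; do
  echo "${machine_type}.ign"
done
```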

The installation configuration files are all pruned when you run the installation program, so be sure to back up all configuration files that you want to use again.
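
Because the installer consumes the configuration files, a pre-run backup step such as the following is worth scripting. The directory layout and file contents here are illustrative placeholders, not values the installer requires:

```shell
# Back up install-config.yaml before running the installer, which consumes it.
# Paths and file contents below are illustrative only.
install_dir=/tmp/okd-demo/install-dir
backup_dir=/tmp/okd-demo/backup
mkdir -p "$install_dir" "$backup_dir"
printf 'apiVersion: v1\nbaseDomain: example.com\n' > "$install_dir/install-config.yaml"
cp "$install_dir/install-config.yaml" "$backup_dir/install-config.yaml.bak"
ls "$backup_dir"
```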

You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation.

The installation process with the Assisted Installer

Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs FCOS and an agent, and the agent handles the provisioning for you. You can install OKD with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal, and on other platforms without integration.

OKD manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.

If possible, use this feature to avoid having to download and configure the agent-based installer.

The installation process with agent-based infrastructure

Agent-based installation is similar to using the Assisted Installer, except that you download and install the agent-based installer first. Agent-based installation is recommended when you want all the convenience of the Assisted Installer, but you need to install with an air-gapped or disconnected network.

If possible, use this feature to avoid having to create a provisioner machine with a bootstrap VM and provision and maintain the cluster infrastructure.

The installation process with installer-provisioned infrastructure

The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster.

You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network.
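
For a customized cluster, those details are pinned in install-config.yaml. The snippet below is a minimal illustration only: the base domain and cluster name are placeholders, and the full set of available fields depends on your platform.

```yaml
apiVersion: v1
baseDomain: example.com        # placeholder base domain
metadata:
  name: demo-cluster           # placeholder cluster name
controlPlane:
  name: master
  replicas: 3                  # number of control plane machines
compute:
- name: worker
  replicas: 3
networking:
  serviceNetwork:
  - 172.30.0.0/16              # CIDR range for the Kubernetes service network
```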

If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure.

With installer-provisioned infrastructure clusters, OKD manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied.

The installation process with user-provisioned infrastructure

You can also install OKD on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided.

If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself, including:

  • The underlying infrastructure for the control plane and compute machines that make up the cluster

  • Load balancers

  • Cluster networking, including the DNS records and required subnets

  • Storage for the cluster infrastructure and applications

If your cluster uses user-provisioned infrastructure, you have the option of adding Fedora compute machines to your cluster.

Installation process details

Because each machine in the cluster requires information about the cluster when it is provisioned, OKD uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. It boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process:


Figure 2. Creating the bootstrap, control plane, and compute machines

After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually.

  • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
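
The manual approval of pending node-bootstrapper CSRs mentioned above is typically a two-step loop: list the pending requests, then approve them. The pipeline below illustrates the filtering step against simulated `oc get csr` output (the CSR names are invented); on a live cluster you would pipe the names to `oc adm certificate approve`:

```shell
# Extract the names of CSRs still in the Pending condition. The table is
# simulated output from `oc get csr`; the CSR names are illustrative.
pending=$(awk 'NR > 1 && $NF == "Pending" {print $1}' <<'EOF'
NAME        AGE   SIGNERNAME                                    REQUESTOR                                                                   CONDITION
csr-8b2br   15m   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
csr-9xk4f   20m   kubernetes.io/kubelet-serving                 system:node:worker-0                                                        Approved,Issued
EOF
)
echo "$pending"
# On a live cluster: echo "$pending" | xargs oc adm certificate approve
```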

Bootstrapping a cluster involves the following steps:

  1. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)

  2. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.

  3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)

  4. The temporary control plane schedules the production control plane to the production control plane machines.

  5. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes.

  6. The temporary control plane shuts down and passes control to the production control plane.

  7. The bootstrap machine injects OKD components into the production control plane.

  8. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)

  9. The control plane sets up the compute nodes.

  10. The control plane installs additional services in the form of a set of Operators.

The result of this bootstrapping process is a running OKD cluster. The cluster then downloads and configures the remaining components needed for day-to-day operation, including the creation of compute machines in supported environments.

Verifying node state after installation

The OKD installation completes when the following installation health checks are successful:

  • The provisioner can access the OKD web console.

  • All control plane nodes are ready.

  • All cluster Operators are available.

After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. It can take some time before all worker nodes report as READY. For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators’ own resources and not on the state of the nodes.
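
A quick way to confirm the "all cluster Operators are available" health check is to filter the AVAILABLE column of `oc get clusteroperators`. The snippet below runs against simulated output so it can be shown without a live cluster; the Operator rows are illustrative:

```shell
# List cluster Operators whose AVAILABLE column is not True (simulated
# `oc get clusteroperators` output; on a live cluster, pipe the real command).
unavailable=$(awk 'NR > 1 && $3 != "True" {print $1}' <<'EOF'
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.13.0    True        False         False      52m
ingress          4.13.0    True        False         False      51m
machine-config   4.13.0    True        False         False      51m
EOF
)
if [ -z "$unavailable" ]; then
  echo "All cluster Operators available"
fi
```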

After your installation completes, you can continue to monitor the condition of the nodes in your cluster by using the following steps.

Prerequisites

  • The installation program completes successfully in the terminal.

Procedure

  1. Show the status of all worker nodes:

    $ oc get nodes

    Example output

    NAME                           STATUS   ROLES    AGE   VERSION
    example-compute1.example.com   Ready    worker   13m   v1.21.6+bb8d50a
    example-compute2.example.com   Ready    worker   13m   v1.21.6+bb8d50a
    example-compute4.example.com   Ready    worker   14m   v1.21.6+bb8d50a
    example-control1.example.com   Ready    master   52m   v1.21.6+bb8d50a
    example-control2.example.com   Ready    master   55m   v1.21.6+bb8d50a
    example-control3.example.com   Ready    master   55m   v1.21.6+bb8d50a
  2. Show the phase of all worker machine nodes:

    $ oc get machines -A

    Example output

    NAMESPACE               NAME                           PHASE     TYPE   REGION   ZONE   AGE
    openshift-machine-api   example-zbbt6-master-0         Running                         95m
    openshift-machine-api   example-zbbt6-master-1         Running                         95m
    openshift-machine-api   example-zbbt6-master-2         Running                         95m
    openshift-machine-api   example-zbbt6-worker-0-25bhp   Running                         49m
    openshift-machine-api   example-zbbt6-worker-0-8b4c2   Running                         49m
    openshift-machine-api   example-zbbt6-worker-0-jkbqt   Running                         49m
    openshift-machine-api   example-zbbt6-worker-0-qrl5b   Running                         49m
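
To go one step further than the listing in step 2, you can filter for machines that have not yet reached the Running phase. The example uses simulated `oc get machines -A` output (machine names are illustrative); on a live cluster, pipe the real command instead:

```shell
# Print machines whose PHASE column is not Running. With empty TYPE/REGION/ZONE
# columns, PHASE is the third whitespace-separated field.
not_running=$(awk 'NR > 1 && $3 != "Running" {print $2}' <<'EOF'
NAMESPACE               NAME                           PHASE          AGE
openshift-machine-api   example-zbbt6-master-0         Running        95m
openshift-machine-api   example-zbbt6-worker-0-25bhp   Provisioning   49m
EOF
)
echo "$not_running"
```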


Installation scope

The scope of the OKD installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes.


Supported platforms for OKD clusters

In OKD 4.13, you can install a cluster that uses installer-provisioned infrastructure on the following platforms:

  • Amazon Web Services (AWS)

  • Google Cloud Platform (GCP)

  • Microsoft Azure

  • Microsoft Azure Stack Hub

  • OpenStack versions 16.1 and 16.2

    • The latest OKD release supports both the latest OpenStack long-life release and intermediate release. For complete OpenStack release compatibility, see the OKD on OpenStack support matrix.
  • IBM Cloud VPC

  • Nutanix

  • oVirt

  • VMware vSphere

  • VMware Cloud (VMC) on AWS

  • Alibaba Cloud

  • Bare metal

For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat.

After installation, the following changes are not supported:

  • Mixing cloud provider platforms

  • Mixing cloud provider components, such as using a persistent storage framework from a different platform than the one that the cluster is installed on

In OKD 4.13, you can install a cluster that uses user-provisioned infrastructure on the following platforms:

  • AWS

  • Azure

  • Azure Stack Hub

  • GCP

  • OpenStack versions 16.1 and 16.2

  • oVirt

  • VMware vSphere

  • VMware Cloud on AWS

  • Bare metal

  • IBM zSystems or IBM LinuxONE

  • IBM Power

Depending on the supported cases for the platform, installations on user-provisioned infrastructure allow you to run machines with full internet access, place your cluster behind a proxy, or perform a restricted network installation. In a restricted network installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a restricted network installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access.
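
For the mirror-registry step described above, the release payload is copied with `oc adm release mirror`. The sketch below shows the shape of that command; the registry host and release version are illustrative placeholders, and because the command needs network and registry access, only the tag-preserving repository substitution that mirroring performs is run directly:

```shell
# Mirroring sketch for a restricted network install. Host and version are
# placeholders:
#   oc adm release mirror \
#     --from=quay.io/openshift/okd:4.13.0-0.okd-2023-06-04-080300 \
#     --to=mirror.example.com:5000/okd \
#     --to-release-image=mirror.example.com:5000/okd:4.13.0-0.okd-2023-06-04-080300
# The mirrored release keeps the original tag under the local repository:
release="quay.io/openshift/okd:4.13.0-0.okd-2023-06-04-080300"
mirror_repo="mirror.example.com:5000/okd"
echo "${mirror_repo}:${release##*:}"
```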

The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms.
