Skip this section if you are installing Rancher on a single node with Docker.

This section describes how to install a Kubernetes cluster according to our best practices for the Rancher server environment. This cluster should be dedicated to running only the Rancher server.

For Rancher versions before v2.4, Rancher must be installed on an RKE (Rancher Kubernetes Engine) Kubernetes cluster. RKE is a CNCF-certified Kubernetes distribution that runs entirely within Docker containers.

In Rancher v2.4, the Rancher management server can be installed on either an RKE cluster or a K3s Kubernetes cluster. K3s is also a fully certified Kubernetes distribution released by Rancher, but it is newer than RKE. We recommend installing Rancher on K3s because K3s is easier to use and more lightweight, with a binary size of less than 100 MB. The Rancher management server can only run on a Kubernetes cluster in an infrastructure provider where Kubernetes is installed using RKE or K3s; use of Rancher on hosted Kubernetes providers, such as EKS, is not supported.

Note: After Rancher is installed on an RKE cluster, there is no migration path to a K3s setup at this time.

As of Rancher v2.5, Rancher can be installed on any Kubernetes cluster, including hosted Kubernetes providers.

The steps to set up an air-gapped Kubernetes cluster on RKE or K3s are shown below.

In this guide, we assume you have created your nodes in your air-gapped environment and have a secure Docker private registry on your bastion server.

Installation Outline

  1. Prepare Images Directory
  2. Create Registry YAML
  3. Install K3s
  4. Save and Start Using the kubeconfig File

1. Prepare Images Directory

Obtain the images tar file for your architecture from the releases page for the version of K3s you will be running.

Place the tar file in the images directory before starting K3s on each node, for example:

  sudo mkdir -p /var/lib/rancher/k3s/agent/images/
  sudo cp ./k3s-airgap-images-$ARCH.tar /var/lib/rancher/k3s/agent/images/
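As an optional sanity check before copying, you can verify the tar against its published checksum. The sketch below fabricates a stand-in tar and checksum file so that it is self-contained; in practice you would download the per-architecture checksum file from the same K3s releases page as the tar (the exact file name there may differ).

```shell
# Stand-ins for the downloaded artifacts; on a real host, both the tar and the
# checksum file come from the K3s releases page for your version and $ARCH.
ARCH=amd64
printf 'stand-in image data' > "k3s-airgap-images-$ARCH.tar"
sha256sum "k3s-airgap-images-$ARCH.tar" > "sha256sum-$ARCH.txt"   # normally downloaded, not generated

# Verify the tar before placing it in /var/lib/rancher/k3s/agent/images/
sha256sum --check "sha256sum-$ARCH.txt"
```

If the check fails, re-download the tar rather than copying it onto the nodes.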

2. Create Registry YAML

Create the registries.yaml file at /etc/rancher/k3s/registries.yaml. This file gives K3s the details it needs to connect to your private registry.

The registries.yaml file should look like this before plugging in the necessary information:

  ---
  mirrors:
    customreg:
      endpoint:
        - "https://ip-to-server:5000"
  configs:
    customreg:
      auth:
        username: xxxxxx # this is the registry username
        password: xxxxxx # this is the registry password
      tls:
        cert_file: <path to the cert file used in the registry>
        key_file: <path to the key file used in the registry>
        ca_file: <path to the ca file used in the registry>

Note: At this time, only secure registries (SSL with a custom CA) are supported with K3s.
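As a concrete sketch, a registry named customreg reachable at 10.10.0.1:5000 (a hypothetical address, with hypothetical credentials and certificate paths) might be configured like this. The snippet writes the file to a local path so it can be inspected safely; on a real node the destination is /etc/rancher/k3s/registries.yaml.

```shell
# All values below are hypothetical placeholders for illustration.
mkdir -p ./etc-rancher-k3s   # stand-in for /etc/rancher/k3s on a real node
cat > ./etc-rancher-k3s/registries.yaml <<'EOF'
---
mirrors:
  customreg:
    endpoint:
      - "https://10.10.0.1:5000"
configs:
  customreg:
    auth:
      username: registryuser
      password: registrypass
    tls:
      cert_file: /etc/ssl/certs/registry.crt
      key_file: /etc/ssl/private/registry.key
      ca_file: /etc/ssl/certs/registry-ca.crt
EOF
```

Note that the key under both mirrors and configs (customreg here) must match, since the configs entry supplies credentials for the mirror of the same name.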

For more information on the private registry configuration file for K3s, refer to the K3s documentation.

3. Install K3s

Rancher needs to be installed on a supported Kubernetes version. To find out which versions of Kubernetes are supported for your Rancher version, refer to the support maintenance terms.

To specify the K3s version, use the INSTALL_K3S_VERSION environment variable when running the K3s installation script.
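INSTALL_K3S_VERSION is a plain environment variable read by the install script, as is the INSTALL_K3S_SKIP_DOWNLOAD flag used in the install commands below. The sketch here substitutes a stub install.sh that merely echoes what it receives, so it is safe to run anywhere; the real script comes from https://get.k3s.io, and the version shown is only an example — pick one supported by your Rancher release.

```shell
# Stub installer for illustration only; it echoes the variables the real
# script would read. Do not use this in place of the script from get.k3s.io.
cat > install.sh <<'EOF'
#!/bin/sh
echo "skip-download=$INSTALL_K3S_SKIP_DOWNLOAD version=$INSTALL_K3S_VERSION"
EOF
chmod +x install.sh

INSTALL_K3S_SKIP_DOWNLOAD=true INSTALL_K3S_VERSION=v1.19.5+k3s2 ./install.sh
# prints: skip-download=true version=v1.19.5+k3s2
```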

Obtain the K3s binary from the releases page, matching the version of the air-gap images tar obtained above.

Also obtain the K3s install script at https://get.k3s.io.

Place the binary in /usr/local/bin on each node. Place the install script anywhere on each node, and name it install.sh.

Install K3s on each server:

  INSTALL_K3S_SKIP_DOWNLOAD=true ./install.sh

Install K3s on each agent:

  INSTALL_K3S_SKIP_DOWNLOAD=true K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken ./install.sh

Note: Replace myserver with the IP address or a valid DNS name of the server, and replace mynodetoken with the node token from the server. The node token is stored on the server at /var/lib/rancher/k3s/server/node-token.

Note: K3s additionally provides a --resolv-conf flag for kubelets, which may help with configuring DNS in air-gap networks.

4. Save and Start Using the kubeconfig File

When you installed K3s on each Rancher server node, a kubeconfig file was created on the node at /etc/rancher/k3s/k3s.yaml. This file contains credentials for full access to the cluster, and you should save this file in a secure location.

To use this kubeconfig file,

  1. Install kubectl, a Kubernetes command-line tool.
  2. Copy the file at /etc/rancher/k3s/k3s.yaml and save it to the directory ~/.kube/config on your local machine.
  3. In the kubeconfig file, the server directive is set to localhost. Change it to the DNS name of your load balancer, keeping port 6443. (The Kubernetes API server is reached on port 6443, while the Rancher server is reached on ports 80 and 443.) Here is an example k3s.yaml:
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: [CERTIFICATE-DATA]
      server: [LOAD-BALANCER-DNS]:6443 # Edit this line
    name: default
  contexts:
  - context:
      cluster: default
      user: default
    name: default
  current-context: default
  kind: Config
  preferences: {}
  users:
  - name: default
    user:
      password: [PASSWORD]
      username: admin
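The edit described in step 3 can also be scripted. The sketch below works on a local copy of the file and uses my-lb.example.com as a placeholder for your load balancer's DNS name; the stock file written by K3s points at the local machine (shown here as https://127.0.0.1:6443).

```shell
# Minimal stand-in for the copied k3s.yaml; only the fields relevant to the
# edit are shown.
cat > ./k3s.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the server directive at the load balancer, keeping port 6443.
sed -i 's#https://127.0.0.1:6443#https://my-lb.example.com:6443#' ./k3s.yaml
grep 'server:' ./k3s.yaml   # now shows the load balancer address
```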

Result: You can now use kubectl to manage your K3s cluster. If you have more than one kubeconfig file, you can specify which one you want to use by passing in the path to the file when using kubectl:

  kubectl --kubeconfig ~/.kube/config/k3s.yaml get pods --all-namespaces

For more information about the kubeconfig file, refer to the K3s documentation or the official Kubernetes documentation about organizing cluster access using kubeconfig files.

Note on Upgrading

Upgrading an air-gap environment can be accomplished in the following manner:

  1. Download the new air-gap images (tar file) from the releases page for the version of K3s you are upgrading to. Place the tar in the /var/lib/rancher/k3s/agent/images/ directory on each node, and delete the old tar file.
  2. Copy over and replace the old K3s binary in /usr/local/bin on each node. Also copy over the install script from https://get.k3s.io (it may have changed since the last release), and run it again with the same environment variables as before.
  3. Restart the K3s service (if it was not restarted automatically by the installer).

Alternatively, you can create the Kubernetes cluster using Rancher Kubernetes Engine (RKE). Before you can start your Kubernetes cluster, you'll need to install RKE and create an RKE config file.

1. Install RKE

Install RKE by following the instructions in the RKE documentation.

2. Create an RKE Config File

From a system that can access ports 22/TCP and 6443/TCP on the Linux host node(s) that you set up in a previous step, use the sample below to create a new file named rancher-cluster.yml.

This file is an RKE configuration file, which is a configuration for the cluster you’re deploying Rancher to.

Replace the values in the code sample below with the help of the RKE Options table. Use the IP addresses or DNS names of the three nodes you created.

Tip: For more details on the options available, see the RKE Config Options.

RKE Options

  Option             Required     Description
  address            yes          The DNS or IP address for the node within the air-gapped network.
  user               yes          A user that can run Docker commands.
  role               yes          List of Kubernetes roles assigned to the node.
  internal_address   optional 1   The DNS or IP address used for internal cluster traffic.
  ssh_key_path       yes          Path to the SSH private key used to authenticate to the node (defaults to ~/.ssh/id_rsa).

1 Some services like AWS EC2 require setting the internal_address if you want to use self-referencing security groups or firewalls.

  nodes:
    - address: 10.10.3.187 # node air gap network IP
      internal_address: 172.31.7.22 # node intra-cluster IP
      user: rancher
      role: ['controlplane', 'etcd', 'worker']
      ssh_key_path: /home/user/.ssh/id_rsa
    - address: 10.10.3.254 # node air gap network IP
      internal_address: 172.31.13.132 # node intra-cluster IP
      user: rancher
      role: ['controlplane', 'etcd', 'worker']
      ssh_key_path: /home/user/.ssh/id_rsa
    - address: 10.10.3.89 # node air gap network IP
      internal_address: 172.31.3.216 # node intra-cluster IP
      user: rancher
      role: ['controlplane', 'etcd', 'worker']
      ssh_key_path: /home/user/.ssh/id_rsa
  private_registries:
    - url: <REGISTRY.YOURDOMAIN.COM:PORT> # private registry url
      user: rancher
      password: '*********'
      is_default: true

3. Run RKE

After configuring rancher-cluster.yml, bring up your Kubernetes cluster:

  rke up --config ./rancher-cluster.yml

4. Save Your Files

Important: The files mentioned below are needed to maintain, troubleshoot, and upgrade your cluster.

Save a copy of the following files in a secure location:

  • rancher-cluster.yml: The RKE cluster configuration file.
  • kube_config_rancher-cluster.yml: The kubeconfig file for the cluster. This file contains credentials for full access to the cluster.
  • rancher-cluster.rkestate: The Kubernetes cluster state file. This file contains the current state of the cluster, including the RKE configuration and the certificates.

    The Kubernetes Cluster State file is only created when using RKE v0.2.0 or higher.

Note: The “rancher-cluster” portion of the latter two file names depends on how you named the RKE cluster configuration file.

Issues or errors?

See the Troubleshooting page.

Next: Install Rancher