Air-gapped Installation

KubeKey is an open-source, lightweight tool for deploying Kubernetes clusters. It allows you to install Kubernetes/K3s only, both Kubernetes/K3s and KubeSphere, and other cloud-native plugins in a flexible, fast, and convenient way. Additionally, it is an effective tool for scaling and upgrading clusters.

In KubeKey v2.1.0, we introduced the concepts of manifest and artifact, which provide a solution for the air-gapped installation of Kubernetes clusters. A manifest file describes the current Kubernetes cluster and defines the content of an artifact. Previously, users had to prepare deployment tools, image (.tar) files, and other binaries themselves, because the Kubernetes version and images to deploy differed from case to case. Now, with KubeKey, air-gapped installation has never been easier: you simply use a manifest file to define what your cluster needs in the air-gapped environment, and then export an artifact file to quickly and easily deploy an image registry and a Kubernetes cluster.
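
At a high level, the workflow consists of four commands, sketched below (the file names manifest-sample.yaml, config-sample.yaml, and kubesphere.tar.gz are the defaults and examples used throughout this guide):

      # On the online (source) host:
      ./kk create manifest                                                # generates manifest-sample.yaml
      ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz   # packages images and binaries

      # On the air-gapped nodes, after copying kk and the artifact over:
      ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz       # deploys the image registry
      ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

Each of these steps is described in detail below.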

Prerequisites

Host IP        Host Name   Usage
192.168.0.2    node1       Online host for packaging the source cluster, with Kubernetes v1.22.10 and KubeSphere v3.3.0 installed
192.168.0.3    node2       Control plane node of the air-gapped environment
192.168.0.4    node3       Image registry node of the air-gapped environment

Preparations

  1. Download KubeKey v2.2.1.

    Download KubeKey from its GitHub Release Page, or use the following command directly:

      curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -

    If you are downloading from the China zone, run the following command first to make sure you download KubeKey from the correct zone:

      export KKZONE=cn

    Then run the following command to download KubeKey:

      curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
  2. In the source cluster, use KubeKey to create a manifest. The following two methods are supported:

    • (Recommended) In the created cluster, run the following command to create a manifest file:

      ./kk create manifest

    • Create and edit the manifest file manually according to the template. For more information, see manifest-example.
  3. Run the following command to modify the manifest configurations in the source cluster:

      vim manifest-sample.yaml
      ---
      apiVersion: kubekey.kubesphere.io/v1alpha2
      kind: Manifest
      metadata:
        name: sample
      spec:
        arches:
        - amd64
        operatingSystems:
        - arch: amd64
          type: linux
          id: centos
          version: "7"
          repository:
            iso:
              localPath:
              url: https://github.com/kubesphere/kubekey/releases/download/v2.2.1/centos7-rpms-amd64.iso
        - arch: amd64
          type: linux
          id: ubuntu
          version: "20.04"
          repository:
            iso:
              localPath:
              url: https://github.com/kubesphere/kubekey/releases/download/v2.2.1/ubuntu-20.04-debs-amd64.iso
        kubernetesDistributions:
        - type: kubernetes
          version: v1.22.10
        components:
          helm:
            version: v3.6.3
          cni:
            version: v0.9.1
          etcd:
            version: v3.4.13
          ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
          ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
          containerRuntimes:
          - type: docker
            version: 20.10.8
          crictl:
            version: v1.22.0
          ##
          docker-registry:
            version: "2"
          harbor:
            version: v2.4.1
          docker-compose:
            version: v2.2.2
        images:
        - docker.io/kubesphere/kube-apiserver:v1.22.10
        - docker.io/kubesphere/kube-controller-manager:v1.22.10
        - docker.io/kubesphere/kube-proxy:v1.22.10
        - docker.io/kubesphere/kube-scheduler:v1.22.10
        - docker.io/kubesphere/pause:3.5
        - docker.io/coredns/coredns:1.8.0
        - docker.io/calico/cni:v3.20.0
        - docker.io/calico/kube-controllers:v3.20.0
        - docker.io/calico/node:v3.20.0
        - docker.io/calico/pod2daemon-flexvol:v3.20.0
        - docker.io/calico/typha:v3.20.0
        - docker.io/kubesphere/flannel:v0.12.0
        - docker.io/openebs/provisioner-localpv:2.10.1
        - docker.io/openebs/linux-utils:2.10.0
        - docker.io/library/haproxy:2.3
        - docker.io/kubesphere/nfs-subdir-external-provisioner:v4.0.2
        - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
        - docker.io/kubesphere/ks-installer:v3.3.0
        - docker.io/kubesphere/ks-apiserver:v3.3.0
        - docker.io/kubesphere/ks-console:v3.3.0
        - docker.io/kubesphere/ks-controller-manager:v3.3.0
        - docker.io/kubesphere/kubectl:v1.20.0
        - docker.io/kubesphere/kubectl:v1.21.0
        - docker.io/kubesphere/kubectl:v1.22.0
        - docker.io/kubesphere/kubefed:v0.8.1
        - docker.io/kubesphere/tower:v0.2.0
        - docker.io/minio/minio:RELEASE.2019-08-07T01-59-21Z
        - docker.io/minio/mc:RELEASE.2019-08-07T23-14-43Z
        - docker.io/csiplugin/snapshot-controller:v4.0.0
        - docker.io/kubesphere/nginx-ingress-controller:v1.1.0
        - docker.io/mirrorgooglecontainers/defaultbackend-amd64:1.4
        - docker.io/kubesphere/metrics-server:v0.4.2
        - docker.io/library/redis:5.0.14-alpine
        - docker.io/library/haproxy:2.0.25-alpine
        - docker.io/library/alpine:3.14
        - docker.io/osixia/openldap:1.3.0
        - docker.io/kubesphere/netshoot:v1.0
        - docker.io/kubeedge/cloudcore:v1.9.2
        - docker.io/kubeedge/iptables-manager:v1.9.2
        - docker.io/kubesphere/edgeservice:v0.2.0
        - docker.io/kubesphere/openpitrix-jobs:v3.2.1
        - docker.io/kubesphere/devops-apiserver:v3.3.0
        - docker.io/kubesphere/devops-controller:v3.3.0
        - docker.io/kubesphere/devops-tools:v3.3.0
        - docker.io/kubesphere/ks-jenkins:v3.3.0-2.319.1
        - docker.io/jenkins/inbound-agent:4.10-2
        - docker.io/kubesphere/builder-base:v3.2.2
        - docker.io/kubesphere/builder-nodejs:v3.2.0
        - docker.io/kubesphere/builder-maven:v3.2.0
        - docker.io/kubesphere/builder-maven:v3.2.1-jdk11
        - docker.io/kubesphere/builder-python:v3.2.0
        - docker.io/kubesphere/builder-go:v3.2.0
        - docker.io/kubesphere/builder-go:v3.2.2-1.16
        - docker.io/kubesphere/builder-go:v3.2.2-1.17
        - docker.io/kubesphere/builder-go:v3.2.2-1.18
        - docker.io/kubesphere/builder-base:v3.2.2-podman
        - docker.io/kubesphere/builder-nodejs:v3.2.0-podman
        - docker.io/kubesphere/builder-maven:v3.2.0-podman
        - docker.io/kubesphere/builder-maven:v3.2.1-jdk11-podman
        - docker.io/kubesphere/builder-python:v3.2.0-podman
        - docker.io/kubesphere/builder-go:v3.2.0-podman
        - docker.io/kubesphere/builder-go:v3.2.2-1.16-podman
        - docker.io/kubesphere/builder-go:v3.2.2-1.17-podman
        - docker.io/kubesphere/builder-go:v3.2.2-1.18-podman
        - docker.io/kubesphere/s2ioperator:v3.2.1
        - docker.io/kubesphere/s2irun:v3.2.0
        - docker.io/kubesphere/s2i-binary:v3.2.0
        - docker.io/kubesphere/tomcat85-java11-centos7:v3.2.0
        - docker.io/kubesphere/tomcat85-java11-runtime:v3.2.0
        - docker.io/kubesphere/tomcat85-java8-centos7:v3.2.0
        - docker.io/kubesphere/tomcat85-java8-runtime:v3.2.0
        - docker.io/kubesphere/java-11-centos7:v3.2.0
        - docker.io/kubesphere/java-8-centos7:v3.2.0
        - docker.io/kubesphere/java-8-runtime:v3.2.0
        - docker.io/kubesphere/java-11-runtime:v3.2.0
        - docker.io/kubesphere/nodejs-8-centos7:v3.2.0
        - docker.io/kubesphere/nodejs-6-centos7:v3.2.0
        - docker.io/kubesphere/nodejs-4-centos7:v3.2.0
        - docker.io/kubesphere/python-36-centos7:v3.2.0
        - docker.io/kubesphere/python-35-centos7:v3.2.0
        - docker.io/kubesphere/python-34-centos7:v3.2.0
        - docker.io/kubesphere/python-27-centos7:v3.2.0
        - quay.io/argoproj/argocd:v2.3.3
        - quay.io/argoproj/argocd-applicationset:v0.4.1
        - ghcr.io/dexidp/dex:v2.30.2
        - docker.io/library/redis:6.2.6-alpine
        - docker.io/jimmidyson/configmap-reload:v0.5.0
        - docker.io/prom/prometheus:v2.34.0
        - docker.io/kubesphere/prometheus-config-reloader:v0.55.1
        - docker.io/kubesphere/prometheus-operator:v0.55.1
        - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
        - docker.io/kubesphere/kube-state-metrics:v2.3.0
        - docker.io/prom/node-exporter:v1.3.1
        - docker.io/prom/alertmanager:v0.23.0
        - docker.io/thanosio/thanos:v0.25.2
        - docker.io/grafana/grafana:8.3.3
        - docker.io/kubesphere/kube-rbac-proxy:v0.8.0
        - docker.io/kubesphere/notification-manager-operator:v1.4.0
        - docker.io/kubesphere/notification-manager:v1.4.0
        - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
        - docker.io/kubesphere/elasticsearch-curator:v5.7.6
        - docker.io/kubesphere/elasticsearch-oss:6.8.22
        - docker.io/kubesphere/fluentbit-operator:v0.13.0
        - docker.io/library/docker:19.03
        - docker.io/kubesphere/fluent-bit:v1.8.11
        - docker.io/kubesphere/log-sidecar-injector:1.1
        - docker.io/elastic/filebeat:6.7.0
        - docker.io/kubesphere/kube-events-operator:v0.4.0
        - docker.io/kubesphere/kube-events-exporter:v0.4.0
        - docker.io/kubesphere/kube-events-ruler:v0.4.0
        - docker.io/kubesphere/kube-auditing-operator:v0.2.0
        - docker.io/kubesphere/kube-auditing-webhook:v0.2.0
        - docker.io/istio/pilot:1.11.1
        - docker.io/istio/proxyv2:1.11.1
        - docker.io/jaegertracing/jaeger-operator:1.27
        - docker.io/jaegertracing/jaeger-agent:1.27
        - docker.io/jaegertracing/jaeger-collector:1.27
        - docker.io/jaegertracing/jaeger-query:1.27
        - docker.io/jaegertracing/jaeger-es-index-cleaner:1.27
        - docker.io/kubesphere/kiali-operator:v1.38.1
        - docker.io/kubesphere/kiali:v1.38
        - docker.io/library/busybox:1.31.1
        - docker.io/library/nginx:1.14-alpine
        - docker.io/joosthofman/wget:1.0
        - docker.io/nginxdemos/hello:plain-text
        - docker.io/library/wordpress:4.8-apache
        - docker.io/mirrorgooglecontainers/hpa-example:latest
        - docker.io/library/java:openjdk-8-jre-alpine
        - docker.io/fluent/fluentd:v1.4.2-2.0
        - docker.io/library/perl:latest
        - docker.io/kubesphere/examples-bookinfo-productpage-v1:1.16.2
        - docker.io/kubesphere/examples-bookinfo-reviews-v1:1.16.2
        - docker.io/kubesphere/examples-bookinfo-reviews-v2:1.16.2
        - docker.io/kubesphere/examples-bookinfo-details-v1:1.16.2
        - docker.io/kubesphere/examples-bookinfo-ratings-v1:1.16.3
        - docker.io/weaveworks/scope:1.13.0
        registry:
          auths: {}

    Note

    • If the artifact file to export contains ISO dependencies, such as conntrack and chrony, set the address for downloading the ISO dependencies in .repository.iso.url of operatingSystems. Alternatively, you can download the ISO package in advance, fill in its local path in localPath, and delete the url configuration item (see the sketch after this list).

    • You need to enable the harbor and docker-compose configuration items, which will be used when you use KubeKey to build a Harbor registry for pushing images.

    • By default, the list of images in the created manifest is obtained from docker.io.

    • You can customize the manifest-sample.yaml file to export the desired artifact file.

    • You can download the ISO files at https://github.com/kubesphere/kubekey/releases/tag/v2.2.1.
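
    For example, a minimal sketch of the localPath variant, assuming the CentOS 7 ISO was downloaded to /tmp beforehand (the path is an example, not a requirement):

      operatingSystems:
      - arch: amd64
        type: linux
        id: centos
        version: "7"
        repository:
          iso:
            # Assumed example path to a pre-downloaded ISO; the url item is removed.
            localPath: /tmp/centos7-rpms-amd64.iso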

  4. Export the artifact from the source cluster.

    Run the following command directly:

      ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

    If you are downloading from the China zone, run the following commands instead:

      export KKZONE=cn
      ./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

    Note

    An artifact is a .tgz package containing the image package (.tar) and the associated binaries exported from the specified manifest file. You can specify an artifact in the KubeKey commands for initializing the image registry, creating clusters, adding nodes, and upgrading clusters; KubeKey then automatically unpacks the artifact and uses the unpacked files when running the command.

    • Make sure the network connection is working during the export.

    • KubeKey will resolve the image names in the image list. If pulling the images requires authentication, you can configure the credentials in .registry.auths in the manifest file, as shown below.
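
    A minimal sketch of that auths configuration, assuming a registry registry.example.com that requires login (the host name and credentials are placeholders, not values from this guide):

      registry:
        auths:
          "registry.example.com":   # placeholder registry host
            username: myuser        # placeholder credentials
            password: mypassword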

Install Clusters in the Air-gapped Environment

  1. Copy the downloaded KubeKey and artifact to nodes in the air-gapped environment using a USB device.
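
    After copying, you can optionally confirm that the artifact survived the transfer intact by comparing checksums (assuming sha256sum is available on both hosts):

      # On the online host, before copying:
      sha256sum kubesphere.tar.gz > kubesphere.tar.gz.sha256

      # On the air-gapped node, after copying both files:
      sha256sum -c kubesphere.tar.gz.sha256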

  2. Run the following command to create a configuration file for the air-gapped cluster:

      ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f config-sample.yaml
  3. Run the following command to modify the configuration file:

      vim config-sample.yaml

    Note

    • Modify the node information according to the actual configuration of the air-gapped environment.
    • You must specify the node where the registry is to be deployed (used when KubeKey deploys a self-built Harbor registry).
    • In registry, set the value of type to harbor. Otherwise, the Docker registry is installed by default.
      apiVersion: kubekey.kubesphere.io/v1alpha2
      kind: Cluster
      metadata:
        name: sample
      spec:
        hosts:
        - {name: master, address: 192.168.149.133, internalAddress: 192.168.149.133, user: root, password: "P@ssw0rd"}
        - {name: node1, address: 192.168.149.134, internalAddress: 192.168.149.134, user: root, password: "P@ssw0rd"}
        roleGroups:
          etcd:
          - master
          control-plane:
          - master
          worker:
          - node1
          # If you want to use KubeKey to automatically deploy the image registry, set this value. You are advised to deploy the registry and the cluster on separate nodes.
          registry:
          - node1
        controlPlaneEndpoint:
          ## Internal loadbalancer for apiservers
          # internalLoadbalancer: haproxy
          domain: lb.kubesphere.local
          address: ""
          port: 6443
        kubernetes:
          version: v1.22.10
          clusterName: cluster.local
        network:
          plugin: calico
          kubePodsCIDR: 10.233.64.0/18
          kubeServiceCIDR: 10.233.0.0/18
          ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
          multusCNI:
            enabled: false
        registry:
          # To use KubeKey to deploy Harbor, set the value of this parameter to harbor. If you do not set this parameter and still use KubeKey to create a container image registry, the Docker registry is used by default.
          type: harbor
          # If Harbor or another registry deployed by KubeKey requires login, you can set the auths parameter of the registry. However, if you create a Docker registry using KubeKey, you do not need to set the auths parameter.
          # Note: If you use KubeKey to deploy Harbor, do not set this parameter until Harbor is started.
          #auths:
          #  "dockerhub.kubekey.local":
          #    username: admin
          #    password: Harbor12345
          # Set the private registry to use during cluster deployment.
          privateRegistry: ""
          namespaceOverride: ""
          registryMirrors: []
          insecureRegistries: []
        addons: []
  4. Run the following command to install an image registry:

      ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

    Note

    The parameters in the command are explained as follows:

    • config-sample.yaml: Specifies the configuration file of the cluster in the air-gapped environment.

    • kubesphere.tar.gz: Specifies the artifact package exported from the source cluster.
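
    After the command finishes, you can optionally check from the registry node that Harbor is healthy (a sketch assuming the default host name dockerhub.kubekey.local; -k skips verification of the self-signed certificate):

      curl -k https://dockerhub.kubekey.local/api/v2.0/health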

  5. Create a Harbor project.

    Note

    Harbor adopts the role-based access control (RBAC) mechanism, which means that only specified users can perform certain operations. Therefore, you must create a project before pushing images to Harbor. Harbor supports two types of projects:

    • Public: All users can pull images from the project.

    • Private: Only project members can pull images from the project.

    The username and password for logging in to Harbor are admin and Harbor12345 by default. The Harbor installation files are located in /opt/harbor, where you can perform O&M of Harbor.

    Method 1: Run the following commands to create the Harbor projects.

    a. Run the following command to download the script that initializes the Harbor registry:

      curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh

    b. Run the following command to modify the script:

      vim create_project_harbor.sh
      #!/usr/bin/env bash

      # Copyright 2018 The KubeSphere Authors.
      #
      # Licensed under the Apache License, Version 2.0 (the "License");
      # you may not use this file except in compliance with the License.
      # You may obtain a copy of the License at
      #
      #     http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.

      url="https://dockerhub.kubekey.local"  # Change the value of url to https://dockerhub.kubekey.local.
      user="admin"
      passwd="Harbor12345"

      harbor_projects=(library
          kubesphereio
          kubesphere
          calico
          coredns
          openebs
          csiplugin
          minio
          mirrorgooglecontainers
          osixia
          prom
          thanosio
          jimmidyson
          grafana
          elastic
          istio
          jaegertracing
          jenkins
          weaveworks
          openpitrix
          joosthofman
          nginxdemos
          fluent
          kubeedge
      )

      for project in "${harbor_projects[@]}"; do
          echo "creating $project"
          # Add -k at the end of the curl command.
          curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k
      done

    Note

    • Change the value of url to https://dockerhub.kubekey.local.

    • The project names in the registry must be the same as the image namespaces in the image list.

    • Add -k at the end of the curl command.

    c. Run the following commands to create the Harbor projects:

      chmod +x create_project_harbor.sh
      ./create_project_harbor.sh
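
    To verify that the projects were created, you can list them through the same Harbor API (a sketch assuming the url and credentials used in the script):

      curl -k -u admin:Harbor12345 "https://dockerhub.kubekey.local/api/v2.0/projects?page_size=50"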

    Method 2: Log in to Harbor and create a project. Set the project to Public, so that any user can pull images from this project. For more information, please refer to Create Projects.

    (Figure: Harbor login page)

  6. Run the following command again to modify the cluster configuration file:

      vim config-sample.yaml

      ...
      registry:
        type: harbor
        auths:
          "dockerhub.kubekey.local":
            username: admin
            password: Harbor12345
        privateRegistry: "dockerhub.kubekey.local"
        namespaceOverride: "kubesphereio"
        registryMirrors: []
        insecureRegistries: []
      addons: []

    Note

    • In auths, enter dockerhub.kubekey.local with the username (admin) and password (Harbor12345).
    • In privateRegistry, enter dockerhub.kubekey.local.
    • In namespaceOverride, enter kubesphereio.
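
    As a hedged illustration of what these settings do: with privateRegistry and namespaceOverride set as above, KubeKey rewrites image references so they point at the self-built Harbor rather than docker.io, roughly as follows:

      # Original reference in the image list:
      #   docker.io/kubesphere/kube-apiserver:v1.22.10
      # Reference used during the air-gapped deployment (assumed rewriting):
      #   dockerhub.kubekey.local/kubesphereio/kube-apiserver:v1.22.10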
  7. Run the following command to install a KubeSphere cluster:

      ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

    The parameters are explained as follows:

    • config-sample.yaml: Specifies the configuration file for the cluster in the air-gapped environment.
    • kubesphere.tar.gz: Specifies the artifact tarball packaged from the source cluster.
    • --with-packages: Required if you want to install the ISO dependencies.
  8. Run the following command to view the cluster status:

      kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

    After the installation is completed, the following information is displayed:

      **************************************************
      #####################################################
      ###              Welcome to KubeSphere!           ###
      #####################################################

      Console: http://192.168.149.133:30880
      Account: admin
      Password: P@88w0rd

      NOTES:
        1. After you log into the console, please check the
           monitoring status of service components in
           the "Cluster Management". If any service is not
           ready, please wait patiently until all components
           are up and running.
        2. Please change the default password after login.

      #####################################################
      https://kubesphere.io             2022-02-28 23:30:06
      #####################################################
  9. Access KubeSphere's web console at http://{IP}:30880 using the default account and password (admin/P@88w0rd).

(Figure: KubeSphere login page)

Note

To access the console, make sure that port 30880 is enabled in your security group.
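
If the console is unreachable, you can also confirm the NodePort from inside the cluster (a sketch; ks-console is the console Service created by KubeSphere in the kubesphere-system namespace):

  kubectl get svc -n kubesphere-system ks-console
  # Expect a NodePort service exposing port 30880.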