Deploy KubeSphere on Azure VM Instance

Before You Begin

Technically, you can either install and manage Kubernetes yourself or adopt a managed Kubernetes solution. If you are looking for a hands-off way to take advantage of Kubernetes, a fully managed platform may suit you best; see Deploy KubeSphere on AKS for more details. However, if you want more control over your configuration and want to set up a highly available cluster on Azure, this tutorial will help you create a production-ready Kubernetes and KubeSphere cluster.

Introduction

In this tutorial, we will use two key features of Azure virtual machines (VMs):

  • Virtual Machine Scale Sets (VMSS): Azure VMSS lets you create and manage a group of load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule (the Kubernetes Autoscaler is available but not covered in this tutorial; see autoscaler for more details), which makes VMSS a natural fit for Worker nodes.
  • Availability Sets: An availability set is a logical grouping of VMs within a datacenter that are automatically distributed across fault domains. This limits the impact of potential physical hardware failures, network outages, or power interruptions. All the Master and etcd VMs will be placed in an availability set to achieve high availability.

Besides those VMs, other resources such as a Load Balancer, a Virtual Network, and a Network Security Group will also be involved.

Prerequisites

  • You need an Azure account to create all the resources.
  • Basic knowledge of Azure Resource Manager (ARM) templates, which are files that define the infrastructure and configuration for your project.
  • Considering data persistence, for a production environment we recommend that you prepare persistent storage and create a StorageClass in advance (see the sketch after this list). For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
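For reference, below is a minimal StorageClass sketch using the in-tree kubernetes.io/azure-disk provisioner. The class name and parameters are illustrative assumptions, and dynamic provisioning with Azure Disks additionally requires the Azure cloud provider to be configured in the cluster, which this tutorial does not cover.

    # Illustrative sketch only: requires the Azure cloud provider to be
    # configured in the cluster, which is outside the scope of this tutorial.
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: azure-disk-standard            # hypothetical name
    provisioner: kubernetes.io/azure-disk
    parameters:
      storageaccounttype: StandardSSD_LRS  # managed disk SKU
      kind: Managed
    reclaimPolicy: Delete
    volumeBindingMode: WaitForFirstConsumer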

Architecture

Six Ubuntu 18.04 machines will be deployed in an Azure Resource Group. Three of them are grouped into an availability set, serving as both the Masters and etcd members of the Kubernetes control plane. The other three VMs are defined in a VMSS, where the Worker nodes will run.


Those VMs will be attached to a load balancer. There are two predefined rules in the LB:

  • Inbound NAT: an SSH port is mapped for each machine so that you can easily manage the VMs.
  • Load Balancing: HTTP and HTTPS ports are mapped to the Node pool by default. Other ports can be added on demand.
Service     Protocol   Rule             Backend Port   Frontend Port/Ports                 Pools
ssh         TCP        Inbound NAT      22             50200, 50201, 50202, 50100~50199    Master, Node
apiserver   TCP        Load Balancing   6443           6443                                Master
ks-console  TCP        Load Balancing   30880          30880                               Master
http        TCP        Load Balancing   80             80                                  Node
https       TCP        Load Balancing   443            443                                 Node

Deploy HA Cluster Infrastructure

You don’t have to create those resources one by one with wizards. Following the infrastructure-as-code best practice on Azure, all resources in the architecture are already defined as ARM templates.

Start to deploy with one click

Click the Deploy button below, and you will be redirected to Azure and asked to fill in deployment parameters.

[Deploy to Azure]  [Visualize]
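If you prefer the command line, the same template can be deployed with the Azure CLI. The following is a sketch: the template URI is a placeholder for the template behind the Deploy button, and the parameter names (adminUsername, adminKey) are assumed from the portal form described below.

    # create the resource group used throughout this tutorial
    az group create --name KubeSphereVMRG --location westus

    # deploy the ARM template; replace <template-uri> with the URI
    # of the template behind the Deploy button
    az deployment group create \
      --resource-group KubeSphereVMRG \
      --template-uri <template-uri> \
      --parameters adminUsername=kubesphere adminKey="$(cat ~/.ssh/id_rsa.pub)"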

Change template parameters

Only a few parameters need to be changed.

  • Click Create new under Resource group and enter a name such as “KubeSphereVMRG”.
  • Enter Admin Username.
  • Copy your public SSH key into the Admin Key field. Alternatively, create a new one with ssh-keygen (see the example after this list).
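If you need a new key pair, for example:

    # generate a new RSA key pair (accept the default path ~/.ssh/id_rsa)
    ssh-keygen -t rsa -b 4096 -C "kubesphere-azure"

    # print the public key so you can paste it into the Admin Key field
    cat ~/.ssh/id_rsa.pub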


Note

Password authentication is disabled in the Linux configuration. Only SSH key authentication is accepted.

Click the Purchase button at the bottom when you are ready to continue.

Review Azure Resources in the Portal

Once the deployment succeeds, you can find all the resources you need in the resource group KubeSphereVMRG. Take your time and check them one by one if you are new to Azure. Record the public IP address of the load balancer and the private IP addresses of the VMs; you will need them in the next step.
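You can also look them up with the Azure CLI. The sketch below assumes a worker scale set name; replace <vmss-name> with the actual name in your resource group.

    # public IP address of the load balancer
    az network public-ip list -g KubeSphereVMRG -o table

    # private IP addresses of the Master VMs
    az vm list-ip-addresses -g KubeSphereVMRG -o table

    # private IP addresses of the Worker VMSS instances
    az vmss nic list -g KubeSphereVMRG --vmss-name <vmss-name> \
      --query "[].ipConfigurations[].privateIpAddress" -o tsv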

(Screenshot: newly created resources)

Deploy Kubernetes and KubeSphere

You can execute the following commands on your own device, or connect to one of the Master VMs through SSH and run them there. During the installation, files will be downloaded and distributed to each VM, so the installation is much faster when KubeKey runs inside the virtual network (intranet) rather than over the Internet.

    # copy your private SSH key to master-0 (replace 40.81.5.xx with the public IP of the load balancer)
    scp -P 50200 ~/.ssh/id_rsa kubesphere@40.81.5.xx:/home/kubesphere/.ssh/

    # ssh to master-0
    ssh -i ~/.ssh/id_rsa -p 50200 kubesphere@40.81.5.xx

Download KubeKey

KubeKey is the next-generation installer, which provides an easy, fast, and flexible way to install Kubernetes and KubeSphere v3.0.0.

  1. Download KubeKey so that you can generate a configuration file in the next step.

     Download KubeKey using the following command:

         wget -c https://kubesphere.io/download/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz

     Alternatively, download KubeKey from the GitHub Release Page or use the following command directly:

         wget https://github.com/kubesphere/kubekey/releases/download/v1.0.0/kubekey-v1.0.0-linux-amd64.tar.gz -O - | tar -xz

     Make kk executable:

         chmod +x kk

  2. Create an example configuration file with default configurations. Here Kubernetes v1.17.9 is used as an example.

         ./kk create config --with-kubesphere v3.0.0 --with-kubernetes v1.17.9

Note

These Kubernetes versions have been fully tested with KubeSphere: v1.15.12, v1.16.13, v1.17.9 (default), and v1.18.6.

config-sample.yaml Example

    spec:
      hosts:
      - {name: master-0, address: 40.81.5.xx, port: 50200, internalAddress: 10.0.1.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      - {name: master-1, address: 40.81.5.xx, port: 50201, internalAddress: 10.0.1.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      - {name: master-2, address: 40.81.5.xx, port: 50202, internalAddress: 10.0.1.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      - {name: node000000, address: 40.81.5.xx, port: 50100, internalAddress: 10.0.0.4, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      - {name: node000001, address: 40.81.5.xx, port: 50101, internalAddress: 10.0.0.5, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      - {name: node000002, address: 40.81.5.xx, port: 50102, internalAddress: 10.0.0.6, user: kubesphere, privateKeyPath: "~/.ssh/id_rsa"}
      roleGroups:
        etcd:
        - master-0
        - master-1
        - master-2
        master:
        - master-0
        - master-1
        - master-2
        worker:
        - node000000
        - node000001
        - node000002

For a complete configuration sample explanation, please see this file.

Configure the Load Balancer

In addition to the node information, you need to provide the load balancer information in the same YAML file. You can find the IP address in Azure -> KubeSphereVMRG -> PublicLB. Assuming the IP address and listening port of the load balancer are 40.81.5.xx and 6443 respectively, you can refer to the following example.

The configuration example in config-sample.yaml

    ## Public LB config example
    ## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
    controlPlaneEndpoint:
      domain: lb.kubesphere.local
      address: "40.81.5.xx"
      port: "6443"
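If you want to double-check the 6443 rule from the CLI, you can list the rules on PublicLB (the load balancer created by the template):

    # confirm that the apiserver rule (port 6443) exists on the load balancer
    az network lb rule list -g KubeSphereVMRG --lb-name PublicLB -o table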

Note

The public load balancer is used directly instead of an internal load balancer due to Azure Load Balancer limits.

Persistent Storage Plugin Configuration

See Storage Configuration for details.

Configure the Network Plugin

Azure Virtual Network does not support the IPIP mode used by Calico, so you need to change the network plugin to Flannel:

    network:
      plugin: flannel
      kubePodsCIDR: 10.233.64.0/18
      kubeServiceCIDR: 10.233.0.0/18

Start to Bootstrap a Cluster

After you complete the configuration, you can execute the following command to start the installation:

    ./kk create cluster -f config-sample.yaml

Inspect the logs of installation:

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

When the installation finishes, you can see the following message:

    #####################################################
    ###              Welcome to KubeSphere!           ###
    #####################################################
    Console: http://10.128.0.44:30880
    Account: admin
    Password: P@88w0rd
    NOTES:
    1. After logging into the console, please check the
       monitoring status of service components in
       the "Cluster Management". If any service is not
       ready, please wait patiently until all components
       are ready.
    2. Please modify the default password after login.
    #####################################################
    https://kubesphere.io             2020-xx-xx xx:xx:xx
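Before moving on, you can run a quick sanity check with kubectl on one of the Master VMs:

    # all six nodes should be in the Ready state
    kubectl get node -o wide

    # all pods should eventually reach the Running state
    kubectl get pod --all-namespaces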

Access KubeSphere Console

Congratulations! Now you can access the KubeSphere console at http://10.128.0.44:30880 (replace the IP address with your own). To reach the console from outside Azure, use the public IP address of the load balancer, since port 30880 is already mapped to the Master pool.

Add Additional Ports

Since we are using a self-hosted Kubernetes solution on Azure, the load balancer is not integrated with the Kubernetes Service resource. However, you can still manually map a NodePort to the PublicLB. Two steps are required, as listed below; a CLI sketch follows the list.

  1. Create a new load balancing rule in the Load Balancer.
  2. Create an inbound security rule in the Network Security Group to allow Internet access to the new port.
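Both steps can also be scripted with the Azure CLI. The sketch below maps an example NodePort 30000; the backend pool and NSG names are assumptions, so replace them with the actual names in your resource group.

    # 1. map NodePort 30000 through the public load balancer
    az network lb rule create -g KubeSphereVMRG --lb-name PublicLB \
      --name nodeport-30000 --protocol Tcp \
      --frontend-port 30000 --backend-port 30000 \
      --backend-pool-name <node-backend-pool>

    # 2. allow the same port through the network security group
    az network nsg rule create -g KubeSphereVMRG --nsg-name <nsg-name> \
      --name allow-nodeport-30000 --priority 1100 \
      --direction Inbound --access Allow --protocol Tcp \
      --destination-port-ranges 30000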