Guide for adding Windows Nodes in Kubernetes

The Kubernetes platform can now be used to run both Linux and Windows containers. This page shows how one or more Windows nodes can be registered to a cluster.

Objectives

  • Register a Windows node to the cluster
  • Configure networking so Pods and Services on Linux and Windows can communicate with each other

Before you begin

  • Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. You can use your organization’s licenses for the cluster, or acquire one from Microsoft, a reseller, or via the major cloud providers such as GCP, AWS, and Azure by provisioning a virtual machine running Windows Server through their marketplaces. A time-limited trial is also available.

  • Build a Linux-based Kubernetes cluster in which you have access to the control plane (some examples include Creating a single control-plane cluster with kubeadm, AKS Engine, GCE, or AWS).

Getting Started: Adding a Windows Node to Your Cluster

Plan IP Addressing

Kubernetes cluster management requires careful planning of your IP addresses so that you do not inadvertently cause network collisions. This guide assumes that you are familiar with Kubernetes networking concepts.

In order to deploy your cluster you need the following address spaces:

  • Service Subnet: A non-routable, purely virtual subnet that is used by pods to uniformly access services without caring about the network topology. It is translated to/from routable address space by kube-proxy running on the nodes. Default: 10.96.0.0/12
  • Cluster Subnet: A global subnet that is used by all pods in the cluster. Each node is assigned a smaller /24 subnet from this for its pods to use. It must be large enough to accommodate all pods used in your cluster. To calculate the minimum subnet size: (number of nodes) + (number of nodes × maximum pods per node). Example: for a 5-node cluster with 100 pods per node: (5) + (5 × 100) = 505. Default: 10.244.0.0/16
  • Kubernetes DNS Service IP: IP address of the kube-dns service that is used for DNS resolution and cluster service discovery. Default: 10.96.0.10
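As a quick sanity check, the cluster-subnet arithmetic above can be reproduced in a few lines of shell (the node and pod counts are the hypothetical values from the example, not a recommendation):

```shell
# Minimum address count for the cluster subnet, per the formula above:
# (number of nodes) + (number of nodes * maximum pods per node)
nodes=5
max_pods_per_node=100
needed=$((nodes + nodes * max_pods_per_node))
echo "addresses needed: ${needed}"       # 505 for the example cluster
echo "addresses in a /16: $((2 ** 16))"  # the default 10.244.0.0/16 has ample headroom
```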

Review the networking options supported in ‘Intro to Windows containers in Kubernetes: Supported Functionality: Networking’ to determine how you need to allocate IP addresses for your cluster.

Components that run on Windows

While the Kubernetes control-plane runs on your Linux node(s), the following components are configured and run on your Windows node(s).

  1. kubelet
  2. kube-proxy
  3. kubectl (optional)
  4. Container runtime

Get the latest binaries (v1.14 or later) from https://github.com/kubernetes/kubernetes/releases. The Windows amd64 binaries for kubeadm, kubectl, kubelet, and kube-proxy can be found under the CHANGELOG link.

Networking Configuration

Once you have a Linux-based Kubernetes control-plane (“Master”) node you are ready to choose a networking solution. This guide illustrates using Flannel in VXLAN mode for simplicity.

Configuring Flannel in VXLAN mode on the Linux control-plane

  1. Prepare Kubernetes master for Flannel

    It is recommended to enable bridged IPv4 traffic to iptables chains when using Flannel. You can do so with the following command:

      sudo sysctl net.bridge.bridge-nf-call-iptables=1
  2. Download & configure Flannel

    Download the most recent Flannel manifest:

      wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    There are two sections you should modify to enable the vxlan networking backend:

    After applying the steps below, the net-conf.json section of kube-flannel.yml should look as follows:

      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan",
            "VNI": 4096,
            "Port": 4789
          }
        }

    Note: The VNI must be set to 4096 and the port to 4789 for Flannel on Linux to interoperate with Flannel on Windows. Support for other VNIs is coming soon. See the VXLAN documentation for an explanation of these fields.

  3. In the net-conf.json section of your kube-flannel.yml, double-check:

      • The cluster subnet (e.g. "10.244.0.0/16") is set as per your IP plan.
      • VNI 4096 is set in the backend.
      • Port 4789 is set in the backend.

    In the cni-conf.json section of your kube-flannel.yml, change the network name to vxlan0.

    Your cni-conf.json should look as follows:

      cni-conf.json: |
        {
          "name": "vxlan0",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
  4. Apply the Flannel manifest and validate

    Let’s apply the Flannel configuration:

      kubectl apply -f kube-flannel.yml

    After a few minutes, you should see all the pods in the Running state once the Flannel pod network has been deployed:

      kubectl get pods --all-namespaces

    The output looks as follows:

      NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE
      kube-system   etcd-flannel-master                      1/1     Running   0          1m
      kube-system   kube-apiserver-flannel-master            1/1     Running   0          1m
      kube-system   kube-controller-manager-flannel-master   1/1     Running   0          1m
      kube-system   kube-dns-86f4d74b45-hcx8x                3/3     Running   0          12m
      kube-system   kube-flannel-ds-54954                    1/1     Running   0          1m
      kube-system   kube-proxy-Zjlxz                         1/1     Running   0          1m
      kube-system   kube-scheduler-flannel-master            1/1     Running   0          1m

    Verify that the Flannel DaemonSet has the NodeSelector applied.

      kubectl get ds -n kube-system

    The output looks as follows, with the NodeSelector beta.kubernetes.io/os=linux applied:

      NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                AGE
      kube-flannel-ds   2         2         2       2            2           beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux   21d
      kube-proxy        2         2         2       2            2           beta.kubernetes.io/os=linux                                  26d
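If you prefer to script the net-conf.json and cni-conf.json edits described above, the sketch below shows the idea with sed. It runs against a small stand-in file rather than the real kube-flannel.yml, and it assumes the upstream manifest's default network name is cbr0; confirm both the name and the surrounding formatting match your copy of the manifest before adapting this.

```shell
# Stand-in for the relevant sections of kube-flannel.yml (not the full manifest).
cat > kube-flannel-sample.yml <<'EOF'
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": []
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
EOF

# Rename the CNI network to vxlan0, as required for Windows interop.
sed -i 's/"name": "cbr0"/"name": "vxlan0"/' kube-flannel-sample.yml

# Add the VNI and Port that Flannel on Windows expects (4096 / 4789).
sed -i 's/"Type": "vxlan"/"Type": "vxlan",\n        "VNI": 4096,\n        "Port": 4789/' kube-flannel-sample.yml

echo "patched:" && grep -E '"name"|"VNI"|"Port"' kube-flannel-sample.yml
```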

Join Windows Worker Node

In this section we’ll cover configuring a Windows node from scratch to join an on-premises cluster. If your cluster is in the cloud, you’ll likely want to follow the cloud-specific guides in the public cloud providers section.

Preparing a Windows Node

Note: All code snippets in Windows sections are to be run in a PowerShell environment with elevated permissions (Administrator) on the Windows worker node.

  1. Download the SIG Windows tools repository containing install and join scripts

      [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
      Start-BitsTransfer https://github.com/kubernetes-sigs/sig-windows-tools/archive/master.zip
      tar -xvf .\master.zip --strip-components 3 sig-windows-tools-master/kubeadm/v1.15.0/*
      Remove-Item .\master.zip
  2. Customize the Kubernetes configuration file

      {
        "Cri" : {  // Contains values for container runtime and base container setup
          "Name" : "dockerd",  // Container runtime name
          "Images" : {
            "Pause" : "mcr.microsoft.com/k8s/core/pause:1.2.0",  // Infrastructure container image
            "Nanoserver" : "mcr.microsoft.com/windows/nanoserver:1809",  // Base Nanoserver container image
            "ServerCore" : "mcr.microsoft.com/windows/servercore:ltsc2019"  // Base ServerCore container image
          }
        },
        "Cni" : {  // Contains values for networking executables
          "Name" : "flannel",  // Name of network fabric
          "Source" : [{  // Contains array of objects containing values for network daemon(s)
            "Name" : "flanneld",  // Name of network daemon
            "Url" : "https://github.com/coreos/flannel/releases/download/v0.11.0/flanneld.exe"  // Direct URL pointing to network daemon executable
          }],
          "Plugin" : {  // Contains values for CNI network plugin
            "Name": "vxlan"  // Backend network mechanism to use: ["vxlan" | "bridge"]
          },
          "InterfaceName" : "Ethernet"  // Designated network interface name on Windows node to use as container network
        },
        "Kubernetes" : {  // Contains values for Kubernetes node binaries
          "Source" : {  // Contains values for Kubernetes node binaries
            "Release" : "1.15.0",  // Version of Kubernetes node binaries
            "Url" : "https://dl.k8s.io/v1.15.0/kubernetes-node-windows-amd64.tar.gz"  // Direct URL pointing to Kubernetes node binaries tarball
          },
          "ControlPlane" : {  // Contains values associated with Kubernetes control-plane ("Master") node
            "IpAddress" : "kubemasterIP",  // IP address of control-plane ("Master") node
            "Username" : "localadmin",  // Username on control-plane ("Master") node with remote SSH access
            "KubeadmToken" : "token",  // Kubeadm bootstrap token
            "KubeadmCAHash" : "discovery-token-ca-cert-hash"  // Kubeadm CA key hash
          },
          "KubeProxy" : {  // Contains values for Kubernetes network proxy configuration
            "Gates" : "WinOverlay=true"  // Comma-separated key-value pairs passed to kube-proxy feature gate flag
          },
          "Network" : {  // Contains values for IP ranges in CIDR notation for Kubernetes networking
            "ServiceCidr" : "10.96.0.0/12",  // Service IP subnet used by Services in CIDR notation
            "ClusterCidr" : "10.244.0.0/16"  // Cluster IP subnet used by Pods in CIDR notation
          }
        },
        "Install" : {  // Contains values and configurations for Windows node installation
          "Destination" : "C:\\ProgramData\\Kubernetes"  // Absolute DOS path where Kubernetes will be installed on the Windows node
        }
      }

Note: Users can generate values for the ControlPlane.KubeadmToken and ControlPlane.KubeadmCAHash fields by running kubeadm token create --print-join-command on the Kubernetes control-plane (“Master”) node.
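The two values can also be extracted from that command's output with a short shell sketch; the join command below is a made-up sample, not output from a real cluster:

```shell
# Hypothetical output of `kubeadm token create --print-join-command`:
join_cmd='kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'

# Extract the values for ControlPlane.KubeadmToken and ControlPlane.KubeadmCAHash:
token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
ca_hash=$(echo "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')

echo "KubeadmToken:  $token"
echo "KubeadmCAHash: $ca_hash"
```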

  3. Install containers and Kubernetes (requires a system reboot)

Use the previously downloaded KubeCluster.ps1 script to install Kubernetes on the Windows Server container host:

  .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -install

where -ConfigFile points to the path of the Kubernetes configuration file.

Note: This example uses overlay (vxlan) networking mode, which requires Windows Server 2019 with KB4489899 and Kubernetes v1.14 or later. Users who cannot meet this requirement must use L2bridge networking instead by selecting bridge as the plugin in the configuration file.
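For example, switching to L2bridge means changing the Cni.Plugin section of the configuration file shown earlier to:

```json
"Plugin" : {  // Contains values for CNI network plugin
  "Name": "bridge"  // Backend network mechanism to use: ["vxlan" | "bridge"]
},
```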


On the Windows node you target, this step will:

  1. Enable Windows Server containers role (and reboot)
  2. Download and install the chosen container runtime
  3. Download all needed container images
  4. Download Kubernetes binaries and add them to the $PATH environment variable
  5. Download CNI plugins based on the selection made in the Kubernetes Configuration file
  6. (Optionally) Generate a new SSH key which is required to connect to the control-plane (“Master”) node during joining

    Note: For the SSH key generation step, you also need to add the generated public SSH key to the authorized_keys file on your (Linux) control-plane node. You only need to do this once. The script prints out the steps you can follow to do this, at the end of its output.

Once installation is complete, any of the generated configuration files or binaries can be modified before joining the Windows node.

Join the Windows Node to the Kubernetes cluster

This section covers how to join a Windows node with Kubernetes installed to an existing (Linux) control plane, forming a cluster.

Use the previously downloaded KubeCluster.ps1 script to join the Windows node to the cluster:

  .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -join

where -ConfigFile points to the path of the Kubernetes configuration file.


Note: Should the script fail during the bootstrap or joining procedure for any reason, start a new PowerShell session before each subsequent join attempt.

This step will perform the following actions:

  1. Connect to the control-plane (“Master”) node via SSH to retrieve the Kubeconfig file
  2. Register kubelet as a Windows service
  3. Configure CNI network plugins
  4. Create an HNS network on top of the chosen network interface

    Note: This may cause a network blip for a few seconds while the vSwitch is being created.

  5. (If vxlan plugin is selected) Open up inbound firewall UDP port 4789 for overlay traffic

  6. Register flanneld as a Windows service
  7. Register kube-proxy as a Windows service

Now you can view the Windows nodes in your cluster by running the following:

  kubectl get nodes

Remove the Windows Node from the Kubernetes cluster

In this section we’ll cover how to remove a Windows node from a Kubernetes cluster.

Use the previously downloaded KubeCluster.ps1 script to remove the Windows node from the cluster:

  .\KubeCluster.ps1 -ConfigFile .\Kubeclustervxlan.json -reset

where -ConfigFile points to the path of the Kubernetes configuration file.


This step will perform the following actions on the targeted Windows node:

  1. Delete the Windows node from the Kubernetes cluster
  2. Stop all running containers
  3. Remove all container networking (HNS) resources
  4. Unregister all Kubernetes services (flanneld, kubelet, kube-proxy)
  5. Delete all Kubernetes binaries (kube-proxy.exe, kubelet.exe, flanneld.exe, kubeadm.exe)
  6. Delete all CNI network plugin binaries
  7. Delete the Kubeconfig file used to access the Kubernetes cluster

Public Cloud Providers

Azure

AKS-Engine can deploy a complete, customizable Kubernetes cluster with both Linux & Windows nodes. There is a step-by-step walkthrough available in the docs on GitHub.

GCP

Users can easily deploy a complete Kubernetes cluster on GCE by following this step-by-step walkthrough on GitHub.

Deployment with kubeadm and cluster API

Kubeadm is becoming the de facto standard for users to deploy a Kubernetes cluster. Windows node support in kubeadm is an alpha feature since Kubernetes release v1.16. We are also making investments in cluster API to ensure Windows nodes are properly provisioned. For more details, please consult the kubeadm for Windows KEP.

Next Steps

Now that you’ve configured a Windows worker in your cluster to run Windows containers, you may also want to add one or more Linux nodes to run Linux containers. You are now ready to schedule Windows containers on your cluster.
