# k0s Networking

## In-cluster networking

k0s currently supports only Calico as the built-in in-cluster overlay network provider. A user can, however, opt out of k0s managing the network setup by using `custom` as the network provider type.

When using a custom network provider, the user is expected to set up the networking themselves. This can be achieved, for example, by pushing the network provider manifests into `/var/lib/k0s/manifests`, from where the k0s controllers will pick them up and deploy them into the cluster. More on the automatic manifest handling here.
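
For reference, a minimal sketch of what the relevant part of the k0s configuration could look like when opting out of the built-in provider. The exact `apiVersion`/`kind` and field names may differ between k0s versions, so verify against the version you are running:

```yaml
# k0s.yaml (sketch) - declare the network provider as "custom" so that k0s
# does not deploy Calico; the CNI manifests are then supplied by the user,
# e.g. by placing them under /var/lib/k0s/manifests/.
apiVersion: k0s.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s
spec:
  network:
    provider: custom
```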

## Controller(s) - Worker communication

As one of the goals of k0s is to allow deployment of a totally isolated control plane, we cannot rely on there being an IP route between the controller nodes and the pod overlay network. To enable this communication path, which is mandated by conformance tests, we use an egress selector together with the Konnectivity service to proxy the traffic from the API server to the worker nodes. This ensures that we can always fulfill all the Kubernetes API functionalities while still operating the control plane in total isolation from the workers.
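
For illustration only: k0s wires this up internally, but the mechanism is essentially the upstream kube-apiserver egress selector pointing at the konnectivity server, roughly like the sketch below. This is not a file you need to create or manage yourself, and the socket path shown is hypothetical.

```yaml
# Upstream-style egress selector configuration (illustrative only; k0s
# generates and manages the equivalent internally). Traffic from the
# kube-apiserver destined for the cluster (pods, kubelets) is sent over
# the konnectivity tunnel instead of requiring a direct IP route.
apiVersion: apiserver.k8s.io/v1beta1
kind: EgressSelectorConfiguration
egressSelections:
- name: cluster
  connection:
    proxyProtocol: GRPC
    transport:
      uds:
        # Unix socket shared with the konnectivity-server process; the
        # actual path in a k0s installation may differ.
        udsName: /run/k0s/konnectivity-server/konnectivity-server.sock
```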

## Needed open ports & protocols

| Protocol | Port       | Service             | Direction                 | Notes |
|----------|------------|---------------------|---------------------------|-------|
| TCP      | 2380       | etcd peers          | controller <-> controller |       |
| TCP      | 6443       | kube-apiserver      | Worker, CLI => controller | Authenticated kube API using kube TLS client certs, ServiceAccount tokens with RBAC |
| UDP      | 4789       | Calico              | worker <-> worker         | Calico VXLAN overlay |
| TCP      | 10250      | kubelet             | Master, Worker => Host `*` | Authenticated kubelet API for the master node kube-apiserver (and heapster/metrics-server add-ons) using TLS client certs |
| TCP      | 9443       | k0s-api             | controller <-> controller | k0s controller join API, TLS with token auth |
| TCP      | 8132, 8133 | konnectivity server | worker <-> controller     | Konnectivity is used as a "reverse" tunnel between kube-apiserver and worker kubelets |
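
As an example, on a controller node running firewalld, the controller-side ports from the table could be opened roughly as follows. This is a sketch only; adjust the port list, zones, and tooling to your environment and to which roles the node actually runs:

```shell
# Sketch: open the controller-side ports from the table above using firewalld.
firewall-cmd --permanent --add-port=6443/tcp       # kube-apiserver
firewall-cmd --permanent --add-port=2380/tcp       # etcd peers (controller <-> controller)
firewall-cmd --permanent --add-port=9443/tcp       # k0s controller join API
firewall-cmd --permanent --add-port=8132-8133/tcp  # konnectivity server
firewall-cmd --reload
```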