Configuration options

Control plane

The k0s control plane can be configured via a YAML config file. By default the k0s server command reads a file called k0s.yaml, but it can be told to read any YAML file via the --config option (for example, k0s server --config /path/to/k0s.yaml).

An example config file with the most common options users should configure:

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s
    spec:
      api:
        address: 192.168.68.106
        sans:
        - my-k0s-control.my-domain.com
      network:
        podCIDR: 10.244.0.0/16
        serviceCIDR: 10.96.0.0/12
      extensions:
        helm:
          repositories:
          - name: prometheus-community
            url: https://prometheus-community.github.io/helm-charts
          charts:
          - name: prometheus-stack
            chartname: prometheus-community/prometheus
            version: "11.16.8"
            namespace: default
spec.api

  • address: The local address to bind the API on. Also one of the addresses included in the API server's serving certificate that k0s creates. Defaults to the first non-local address found on the node.
  • sans: List of additional addresses to include in the API server's serving certificate.

spec.network

  • podCIDR: Pod network CIDR to be used in the cluster
  • serviceCIDR: Network CIDR to be used for cluster VIP services.

extensions.helm

List of Helm repositories and charts to deploy during cluster bootstrap. This example deploys Prometheus from the prometheus-community Helm chart repository.

Configuring an HA Control Plane

The following prerequisites are required in order to configure an HA control plane:

Requirements

Load Balancer

A load balancer with a single external address should be configured as the IP gateway for the controllers. The load balancer should allow traffic to each controller on the following ports:

  • 6443 (Kubernetes API)
  • 8132 (konnectivity)
  • 8133 (konnectivity)
  • 9443 (controller join API)

Cluster configuration

On each controller node, create a k0s.yaml configuration file. The following options must match on every node, otherwise the control plane components will end up in inconsistent states (a sketch of the shared fields follows the list):

  • network
  • storage: Needless to say, one cannot create a clustered control plane with each node only storing data locally in SQLite.
  • externalAddress
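
A minimal sketch of these shared fields, assuming the load balancer address used in the full config below (all addresses and CIDRs are illustrative):

    spec:
      api:
        externalAddress: my-lb-address.example.com
        sans:
        - my-lb-address.example.com
      storage:
        type: etcd        # etcd.peerAddress is per-node and may differ
      network:
        podCIDR: 10.244.0.0/16
        serviceCIDR: 10.96.0.0/12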

Full config reference

Note: Many of the options configure things deep down in the "stack" of various components, so please make sure you understand what is being configured and whether or not it works in your specific environment.

A full config file with defaults generated by the k0s default-config command:

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: Cluster
    metadata:
      name: k0s
    spec:
      api:
        externalAddress: my-lb-address.example.com
        address: 192.168.68.106
        sans:
        - 192.168.68.106
        extraArgs: {}
      controllerManager:
        extraArgs: {}
      scheduler:
        extraArgs: {}
      storage:
        type: etcd
        etcd:
          peerAddress: 192.168.68.106
      network:
        podCIDR: 10.244.0.0/16
        serviceCIDR: 10.96.0.0/12
        provider: calico
        calico:
          mode: vxlan
          vxlanPort: 4789
          vxlanVNI: 4096
          mtu: 1450
          wireguard: false
          flexVolumeDriverPath: /usr/libexec/k0s/kubelet-plugins/volume/exec/nodeagent~uds
          ipAutodetectionMethod: ""
      podSecurityPolicy:
        defaultPolicy: 00-k0s-privileged
      workerProfiles: []
      images:
        konnectivity:
          image: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
          version: v0.0.13
        metricsserver:
          image: gcr.io/k8s-staging-metrics-server/metrics-server
          version: v0.3.7
        kubeproxy:
          image: k8s.gcr.io/kube-proxy
          version: v1.20.2
        coredns:
          image: docker.io/coredns/coredns
          version: 1.7.0
        calico:
          cni:
            image: calico/cni
            version: v3.16.2
          flexvolume:
            image: calico/pod2daemon-flexvol
            version: v3.16.2
          node:
            image: calico/node
            version: v3.16.2
          kubecontrollers:
            image: calico/kube-controllers
            version: v3.16.2
        repository: ""
      telemetry:
        interval: 10m0s
        enabled: true
      extensions:
        helm:
          repositories:
          - name: stable
            url: https://charts.helm.sh/stable
          - name: prometheus-community
            url: https://prometheus-community.github.io/helm-charts
          charts:
          - name: prometheus-stack
            chartname: prometheus-community/prometheus
            version: "11.16.8"
            values: |
              server:
                podDisruptionBudget:
                  enabled: false
            namespace: default

spec.api

  • externalAddress: If the k0s controllers are running behind a load balancer, provide the load balancer address here. This configures all cluster components to connect to this address, and the address is also used when joining new nodes into the cluster.
  • address: The local address to bind the API on. Also one of the addresses included in the API server's serving certificate that k0s creates. Defaults to the first non-local address found on the node.
  • sans: List of additional addresses to include in the API server's serving certificate.
  • extraArgs: Map of key-value pairs (strings) of extra arguments to pass down to the Kubernetes api-server process.

spec.controllerManager

  • extraArgs: Map of key-value pairs (strings) of extra arguments to pass down to the Kubernetes controller manager process.

spec.scheduler

  • extraArgs: Map of key-value pairs (strings) of extra arguments to pass down to the Kubernetes scheduler process.
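
A sketch passing extra flags to all three components. The flags shown are ordinary upstream Kubernetes flags chosen for illustration; check that they exist and apply in your Kubernetes version before relying on them:

    spec:
      api:
        extraArgs:
          service-node-port-range: "30000-32767"   # kube-apiserver flag (illustrative)
      controllerManager:
        extraArgs:
          node-monitor-period: "10s"               # kube-controller-manager flag (illustrative)
      scheduler:
        extraArgs:
          v: "2"                                   # log verbosity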

spec.storage

  • type: Type of the data store, either etcd or kine.
  • etcd.peerAddress: The node's address to be used for etcd cluster peering.
  • kine.dataSource: kine datasource URL.

Using type etcd makes k0s create and manage an elastic etcd cluster within the controller nodes.
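
With type kine, the dataSource URL selects the backing store; kine supports backends such as SQLite, MySQL, and PostgreSQL. A sketch with an illustrative MySQL DSN (the host, credentials, and database name are assumptions):

    spec:
      storage:
        type: kine
        kine:
          dataSource: "mysql://k0s:password@tcp(mysql.example.com:3306)/k0s"   # illustrative DSN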

spec.network

  • provider: Network provider, either calico or custom. In the case of custom, the user can bring in any network provider.
  • podCIDR: Pod network CIDR to be used in the cluster
  • serviceCIDR: Network CIDR to be used for cluster VIP services.

Note: In the case of a custom network it is fully the user's responsibility to configure ALL CNI-related setup. This includes the CNI provider itself plus all the host-level setup it might need, such as CNI binaries.
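
A minimal sketch opting out of the built-in provider (the CNI deployment itself is then entirely up to you):

    spec:
      network:
        provider: custom
        podCIDR: 10.244.0.0/16
        serviceCIDR: 10.96.0.0/12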

spec.network.calico

  • mode: vxlan (default) or ipip
  • vxlanPort: The UDP port to use for VXLAN (default 4789)
  • vxlanVNI: The virtual network ID to use for VXLAN. (default: 4096)
  • mtu: MTU to use for overlay network (default 1450)
  • wireguard: enable wireguard based encryption (default false). Your host system must be wireguard ready. See https://docs.projectcalico.org/security/encrypt-cluster-pod-traffic for details.
  • flexVolumeDriverPath: The host path to use for Calico's flex-volume-driver (default: /usr/libexec/k0s/kubelet-plugins/volume/exec/nodeagent~uds). This should only need to be changed if the default path is unwritable. See https://github.com/projectcalico/calico/issues/2712 for details. This option should ideally be paired with a custom volumePluginDir in the profile used on your worker nodes.
  • ipAutodetectionMethod: Forces non-default behaviour for Calico when picking the interface for pod network inter-node routing. (default: "", i.e. not set, so Calico uses its own defaults) See more at: https://docs.projectcalico.org/reference/node/configuration#ip-autodetection-methods
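
A sketch combining these knobs. The interface pattern is illustrative, and wireguard: true additionally requires WireGuard-capable hosts as noted above:

    spec:
      network:
        provider: calico
        calico:
          mode: vxlan
          mtu: 1450
          wireguard: true
          ipAutodetectionMethod: "interface=eth.*"   # pick the inter-node interface by name pattern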

spec.podSecurityPolicy

Configures the default PSP to be set. k0s creates two PSPs out of the box:

  • 00-k0s-privileged (default): no restrictions, always also used for Kubernetes/k0s level system pods
  • 99-k0s-restricted: no host namespaces or root users allowed, no bind mounts from host

As a user you can of course create any supplemental PSPs and bind them to users / service accounts as you need.
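
For example, to make the restricted policy the cluster-wide default:

    spec:
      podSecurityPolicy:
        defaultPolicy: 99-k0s-restricted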

spec.workerProfiles

Array of spec.workerProfiles.workerProfile. Each element has the following properties:

  • name: string, used as the profile selector for the worker process
  • values: mapping object

For each profile, the control plane creates a separate ConfigMap containing kubelet-config.yaml. Based on the --profile argument given to k0s worker (for example k0s worker --profile custom-role), the corresponding ConfigMap is used to extract kubelet-config.yaml. values are recursively merged with the default kubelet-config.yaml.

There are a few fields that cannot be overridden:

  • clusterDNS
  • clusterDomain
  • apiVersion
  • kind

Example:

    workerProfiles:
    - name: custom-role
      values:
        key: value
        mapping:
          innerKey: innerValue

Custom volumePluginDir:

    workerProfiles:
    - name: custom-role
      values:
        volumePluginDir: /var/libexec/k0s/kubelet-plugins/volume/exec

images

Each node under the images key has the same structure

    images:
      konnectivity:
        image: calico/kube-controllers
        version: v3.16.2

The following keys are available:

  • images.konnectivity
  • images.metricsserver
  • images.kubeproxy
  • images.coredns
  • images.calico.cni
  • images.calico.flexvolume
  • images.calico.node
  • images.calico.kubecontrollers
  • images.repository

If images.repository is set and not empty, every image will be pulled from images.repository

Example:

    images:
      repository: "my.own.repo"
      konnectivity:
        image: calico/kube-controllers
        version: v3.16.2
      metricsserver:
        image: gcr.io/k8s-staging-metrics-server/metrics-server
        version: v0.3.7

At runtime the image names are calculated as my.own.repo/calico/kube-controllers:v3.16.2 and my.own.repo/k8s-staging-metrics-server/metrics-server:v0.3.7.

This only affects where the images are pulled from; omitting an image specification here will not prevent the component from being deployed.

Extensions

As stated in the project scope, we intend to keep the scope of k0s quite small and not build gazillions of extensions into the product itself.

To run k0s easily with your preferred extensions you have two options.

  1. Dump all needed extension manifests under /var/lib/k0s/manifests/my-extension. Read more on this approach here.
  2. Define your extensions as Helm charts:
    extensions:
      helm:
        repositories:
        - name: stable
          url: https://charts.helm.sh/stable
        - name: prometheus-community
          url: https://prometheus-community.github.io/helm-charts
        charts:
        - name: prometheus-stack
          chartname: prometheus-community/prometheus
          version: "11.16.8"
          values: |
            storageSpec:
              emptyDir:
                medium: Memory
          namespace: default

This way you get a declarative way to configure the cluster, and the k0s controller manages the setup of the defined extension Helm charts as part of the cluster bootstrap process.

Some examples of what you could use as extension charts:

  • Ingress controllers: Nginx ingress, Traefik ingress (tutorial)
  • Volume storage providers: OpenEBS, Rook, Longhorn
  • Monitoring: Prometheus, Grafana

Telemetry

To build a better end user experience we collect and send telemetry data from clusters. It is enabled by default and can be disabled by setting the corresponding option to false. The default interval is 10 minutes; any valid time.Duration string representation can be used as a value. Example:

    telemetry:
      interval: 2m0s
      enabled: true
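
To opt out of telemetry entirely, set enabled to false:

    telemetry:
      enabled: false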