Known Issues

The Known Issues are updated periodically and are intended to inform you of issues that may not be immediately addressed in the next release.

Snap Docker

If you plan to use K3s with Docker, installing Docker via a snap package is not recommended, as it is known to cause issues running K3s.
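If you are not sure how Docker was installed on a node, the sketch below shows one way to check for the snap package and, if present, replace it with an installation from Docker's official convenience script. This assumes a Linux host with snap available; adapt it to your distribution and existing workloads.

```bash
# Check whether Docker is installed as a snap package (non-zero exit if not).
snap list docker && echo "Docker is installed via snap"

# If it is, remove the snap and reinstall Docker from the upstream
# convenience script before running K3s with --docker.
sudo snap remove docker
curl -fsSL https://get.docker.com | sh
```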

Iptables

If you are running iptables in nftables mode instead of legacy mode, you might encounter issues. We recommend using a newer iptables release (such as 1.6.1+) to avoid problems.

Additionally, iptables versions 1.8.0-1.8.4 have known issues that can cause K3s to fail. See Additional OS Preparations for workarounds.
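To check which iptables version and backend a node is using, you can inspect the version string; iptables 1.8.x reports whether the nf_tables or legacy backend is active. The commands below are a quick diagnostic sketch; the update-alternatives lines apply to Debian/Ubuntu-style systems only.

```bash
# Print the iptables version; 1.8.x also shows the active backend,
# e.g. "iptables v1.8.7 (nf_tables)" or "iptables v1.8.7 (legacy)".
iptables --version

# On Debian/Ubuntu, switch to the legacy backend if needed:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```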

Rootless Mode

Running K3s in rootless mode is experimental and has several known issues.

Upgrading Hardened Clusters from v1.24.x to v1.25.x

Kubernetes removed PodSecurityPolicy in v1.25 in favor of Pod Security Standards. You can read more about PSS in the upstream documentation. For K3s, there are some manual steps that must be taken if any PodSecurityPolicy has been configured on the nodes.

1. On all nodes, update the kube-apiserver-arg value to remove the PodSecurityPolicy admission plugin. Add the following arg value instead: 'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml', but do NOT restart or upgrade K3s yet. Below is an example of what a configuration file might look like after this update for the node to be hardened (a quick check of the updated file is sketched after the example):
```yaml
protect-kernel-defaults: true
secrets-encryption: true
kube-apiserver-arg:
  - 'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml'
  - 'audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log'
  - 'audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml'
  - 'audit-log-maxage=30'
  - 'audit-log-maxbackup=10'
  - 'audit-log-maxsize=100'
  - 'request-timeout=300s'
  - 'service-account-lookup=true'
kube-controller-manager-arg:
  - 'terminated-pod-gc-threshold=10'
  - 'use-service-account-credentials=true'
kubelet-arg:
  - 'streaming-connection-idle-timeout=5m'
  - 'make-iptables-util-chains=true'
```
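If the arguments above are managed through the K3s configuration file (by default /etc/rancher/k3s/config.yaml; adjust the path if you pass flags differently), a quick check that the PodSecurityPolicy plugin is no longer referenced and that the new admission configuration argument is present might look like this sketch:

```bash
# Report whether any kube-apiserver argument still enables the PodSecurityPolicy plugin.
grep -n 'PodSecurityPolicy' /etc/rancher/k3s/config.yaml && echo "PSP still referenced; remove it" || echo "OK: no PSP references"

# Confirm the new admission configuration file is wired in.
grep -n 'admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml' /etc/rancher/k3s/config.yaml
```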
2. Create the /var/lib/rancher/k3s/server/psa.yaml file with the following contents. You may want to exempt additional namespaces as well. The example below exempts kube-system (required), cis-operator-system (optional, but useful when running security scans through Rancher), and system-upgrade (required if doing Automated Upgrades). An optional post-upgrade smoke test of this configuration is sketched after the example.
```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    defaults:
      enforce: "restricted"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: [kube-system, cis-operator-system, system-upgrade]
```
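Once the upgrade in the next step has completed, you can optionally verify that the restricted default is enforced by creating a privileged Pod in a namespace that is not in the exemptions list. The psa-test namespace below is an arbitrary placeholder for this smoke test.

```bash
# A privileged Pod should be rejected by the PodSecurity admission plugin
# in any namespace not listed under exemptions in psa.yaml.
kubectl create namespace psa-test
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: psa-check
  namespace: psa-test
spec:
  containers:
  - name: psa-check
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
EOF
# Expected: the request is denied with an error stating the Pod violates
# PodSecurity "restricted:latest".
kubectl delete namespace psa-test
```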
3. Perform the upgrade as normal. If doing Automated Upgrades, ensure that the namespace where the system-upgrade-controller pod is running is set up to be privileged in accordance with the Pod Security levels (an alternative using kubectl label is sketched after the example):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: system-upgrade
  labels:
    # This value must be privileged for the controller to run successfully.
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.25
    # We are setting these to our _desired_ `enforce` level, but note that the values below can be any of the available options.
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: v1.25
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: v1.25
```
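If the system-upgrade namespace already exists in the cluster, an alternative to applying the manifest above is to set the same Pod Security labels directly with kubectl, as sketched below.

```bash
# Apply (or update) the Pod Security labels on an existing namespace.
kubectl label --overwrite namespace system-upgrade \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/enforce-version=v1.25 \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/audit-version=v1.25 \
  pod-security.kubernetes.io/warn=privileged \
  pod-security.kubernetes.io/warn-version=v1.25
```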
4. After the upgrade is complete, remove any remaining PSP resources from the cluster. In many cases, there may be PodSecurityPolicies and associated RBAC resources in custom files used for hardening within /var/lib/rancher/k3s/server/manifests/. Remove those resources and K3s will update automatically. Sometimes, due to timing, some of these may be left in the cluster, in which case you will need to delete them manually. If the Hardening Guide was previously followed, you should be able to delete them via the following (a search for leftover manifest files is sketched after the commands):
```bash
# Get the resources associated with PSPs
$ kubectl get roles,clusterroles,rolebindings,clusterrolebindings -A | grep -i psp

# Delete those resources:
$ kubectl delete clusterrole.rbac.authorization.k8s.io/psp:restricted-psp clusterrole.rbac.authorization.k8s.io/psp:svclb-psp clusterrole.rbac.authorization.k8s.io/psp:system-unrestricted-psp clusterrolebinding.rbac.authorization.k8s.io/default:restricted-psp clusterrolebinding.rbac.authorization.k8s.io/system-unrestricted-node-psp-rolebinding && kubectl delete -n kube-system rolebinding.rbac.authorization.k8s.io/svclb-psp-rolebinding rolebinding.rbac.authorization.k8s.io/system-unrestricted-svc-acct-psp-rolebinding
```
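To locate any custom hardening manifests on a server node that still define PodSecurityPolicy or PSP-bound RBAC resources, a simple search of the default manifests directory can help; K3s re-applies files in this directory automatically, so edit or remove the files themselves rather than only the in-cluster objects.

```bash
# List manifest files that still reference PodSecurityPolicy so they can be
# edited or removed.
sudo grep -rl 'PodSecurityPolicy' /var/lib/rancher/k3s/server/manifests/
```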