This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

This hardening guide describes how to secure the nodes in your cluster. It is recommended that you follow this guide before installing Kubernetes.

This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version |
| ----------------------- | --------------- | --------------------- | ------------------ |
| Hardening Guide v2.3.5  | Rancher v2.3.5  | Benchmark v1.5        | Kubernetes 1.15    |


Overview

This document provides prescriptive guidance for hardening a production installation of Rancher v2.3.5 with Kubernetes v1.15. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.5.

Known Issues

  • Rancher exec shell and view logs for pods are not functional in a CIS 1.5 hardened setup when only a public IP is provided when registering custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
  • When setting default_pod_security_policy_template_id: to restricted, Rancher creates RoleBindings and ClusterRoleBindings on the default service accounts. The CIS 1.5 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured such that they do not provide a service account token and do not have any explicit rights assignments.

Configure Kernel Runtime Parameters

The following sysctl configuration is recommended for all node types in the cluster. Set the following parameters in /etc/sysctl.d/90-kubelet.conf:

```
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
```

Run sysctl -p /etc/sysctl.d/90-kubelet.conf to enable the settings.
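Before applying the file, it can be worth confirming that every required parameter is present. The sketch below is self-contained (it checks a temporary copy of the expected content); on a real node, point `conf` at /etc/sysctl.d/90-kubelet.conf instead.

```shell
# Sketch: verify each required kernel parameter appears in the conf file.
# A temporary copy is used here so the check runs anywhere; on a node,
# set conf=/etc/sysctl.d/90-kubelet.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxbytes=25000000
EOF
missing=""
for kv in vm.overcommit_memory=1 vm.panic_on_oom=0 kernel.panic=10 \
          kernel.panic_on_oops=1 kernel.keys.root_maxbytes=25000000; do
  grep -qx "$kv" "$conf" || missing="$missing $kv"
done
[ -z "$missing" ] && echo "all kubelet sysctls present" || echo "missing:$missing"
rm -f "$conf"
```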

Configure etcd user and group

A user account and group for the etcd service must be set up before installing RKE. The uid and gid of the etcd user are used in the RKE config.yml to set the proper permissions for files and directories at installation time.

Create etcd user and group

To create the etcd user and group, run the following console commands:

```bash
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
```

Update the RKE config.yml with the uid and gid of the etcd user:

```yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```
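A quick consistency check between the created account and config.yml can catch typos before RKE runs. In this sketch a sample passwd entry stands in for the output of `getent passwd etcd`, so it runs anywhere; on a real node, substitute the actual command.

```shell
# Sketch: confirm the etcd account's uid/gid match the values in config.yml.
# The sample line below stands in for: getent passwd etcd
entry="etcd:x:52034:52034:etcd service account:/home/etcd:/sbin/nologin"
uid=$(echo "$entry" | cut -d: -f3)
gid=$(echo "$entry" | cut -d: -f4)
if [ "$uid" = "52034" ] && [ "$gid" = "52034" ]; then
  status="etcd uid/gid match config.yml"
else
  status="mismatch: uid=$uid gid=$gid"
fi
echo "$status"
```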

Set automountServiceAccountToken to false for default service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace the default service account must include this value:

```yaml
automountServiceAccountToken: false
```

Save the following yaml to a file called account_update.yaml.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.

```bash
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```
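After the script has run, it is worth spot-checking one namespace. The sample yaml below stands in for the output of `kubectl get serviceaccount default -n <namespace> -o yaml`, so the sketch is self-contained.

```shell
# Sketch: spot-check that the patch took effect in one namespace.
# The sample stands in for:
#   kubectl get serviceaccount default -n <namespace> -o yaml
sa_yaml='apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false'
if echo "$sa_yaml" | grep -qx 'automountServiceAccountToken: false'; then
  result="token automount disabled"
else
  result="token automount still enabled"
fi
echo "$result"
```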

Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints.

Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled. This guide uses canal to provide the policy enforcement. Additional information about CNI providers can be found here

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes a permissive example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following yaml as default-allow-all.yaml. Additional documentation about network policies can be found on the Kubernetes site.

This NetworkPolicy is not recommended for production use

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  podSelector: {}
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```
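Since the permissive policy is not recommended for production, a deny-by-default baseline is closer to the intent of network segmentation: a NetworkPolicy that declares both policy types but lists no ingress or egress rules denies all traffic to and from the selected pods. A minimal sketch follows; the file and policy name default-deny-all are illustrative assumptions, not part of the benchmark.

```shell
# Sketch: write a deny-by-default NetworkPolicy (hypothetical name/file).
# Declaring Ingress and Egress policyTypes with no rules denies all traffic
# for every pod matched by the empty podSelector.
cat > default-deny-all.yaml <<'EOF'
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
echo "wrote default-deny-all.yaml"
```

This file could then be applied per namespace with `kubectl apply -f default-deny-all.yaml -n <namespace>`, after which only traffic explicitly allowed by additional policies is permitted.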

Create a bash script file called apply_networkPolicy_to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.

```bash
#!/bin/bash -e
for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the permissive default-allow-all.yaml NetworkPolicy to all namespaces.

Reference Hardened RKE cluster.yml configuration

The reference cluster.yml, used by the RKE CLI, provides the configuration needed to achieve a hardened install of Rancher Kubernetes Engine (RKE). The RKE install documentation provides additional details about the configuration items.

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
kubernetes_version: "v1.15.9-rancher1-1"
enable_network_policy: true
default_pod_security_policy_template_id: "restricted"
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    pod_security_policy: true
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    admission_configuration:
    event_rate_limit:
      enabled: true
  kube-controller:
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds: []
    extra_env: []
    cluster_domain: ""
    infra_container_image: ""
    cluster_dns_server: ""
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: ""
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: ""
  sans: []
  webhook: null
addons: |
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: ingress-nginx
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: tiller
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: tiller
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
addons_include: []
system_images:
  etcd: ""
  alpine: ""
  nginx_proxy: ""
  cert_downloader: ""
  kubernetes_services_sidecar: ""
  kubedns: ""
  dnsmasq: ""
  kubedns_sidecar: ""
  kubedns_autoscaler: ""
  coredns: ""
  coredns_autoscaler: ""
  kubernetes: ""
  flannel: ""
  flannel_cni: ""
  calico_node: ""
  calico_cni: ""
  calico_controllers: ""
  calico_ctl: ""
  calico_flexvol: ""
  canal_node: ""
  canal_cni: ""
  canal_flannel: ""
  canal_flexvol: ""
  weave_node: ""
  weave_cni: ""
  pod_infra_container: ""
  ingress: ""
  ingress_backend: ""
  metrics_server: ""
  windows_pod_infra_container: ""
ssh_key_path: ""
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: ""
  options: {}
ignore_docker_version: false
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
```
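Before running `rke up`, a simple pre-flight grep can confirm the hardening-critical keys made it into the file. The sketch below writes a minimal sample fragment so it is self-contained; in practice, point the loop at your full cluster.yml.

```shell
# Sketch: pre-flight check that hardening-critical keys are present in the
# RKE config. A minimal sample fragment is written here for illustration;
# replace sample-cluster.yml with your real cluster.yml.
cat > sample-cluster.yml <<'EOF'
kubernetes_version: "v1.15.9-rancher1-1"
enable_network_policy: true
default_pod_security_policy_template_id: "restricted"
services:
  kube-api:
    pod_security_policy: true
    secrets_encryption_config:
      enabled: true
EOF
missing=""
for key in enable_network_policy default_pod_security_policy_template_id \
           pod_security_policy secrets_encryption_config; do
  grep -q "$key" sample-cluster.yml || missing="$missing $key"
done
[ -z "$missing" ] && echo "hardening keys present" || echo "missing:$missing"
```

A grep check like this only proves the keys exist; the self-assessment guide referenced above remains the authoritative way to validate the resulting cluster.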

Reference Hardened RKE Template configuration

The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes. RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher documentation for additional installation and RKE Template details.

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: true
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  addons: |-
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ingress-nginx
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: default-psp-role
      namespace: ingress-nginx
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - default-psp
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-psp-rolebinding
      namespace: ingress-nginx
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: default-psp-role
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: cattle-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: default-psp-role
      namespace: cattle-system
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - default-psp
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: default-psp-rolebinding
      namespace: cattle-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: default-psp-role
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: restricted
    spec:
      requiredDropCapabilities:
      - NET_RAW
      privileged: false
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      fsGroup:
        rule: RunAsAny
      runAsUser:
        rule: MustRunAsNonRoot
      seLinux:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      volumes:
      - emptyDir
      - secret
      - persistentVolumeClaim
      - downwardAPI
      - configMap
      - projected
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: psp:restricted
    rules:
    - apiGroups:
      - extensions
      resourceNames:
      - restricted
      resources:
      - podsecuritypolicies
      verbs:
      - use
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: psp:restricted
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: psp:restricted
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:authenticated
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: kube-system
  ignore_docker_version: true
  kubernetes_version: v1.15.9-rancher1-1
  #
  # If you are using calico on AWS
  #
  # network:
  #   plugin: calico
  #   calico_network_provider:
  #     cloud_provider: aws
  #
  # # To specify flannel interface
  #
  # network:
  #   plugin: flannel
  #   flannel_network_provider:
  #     iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  # network:
  #   plugin: canal
  #   canal_network_provider:
  #     iface: eth1
  #
  network:
    mtu: 0
    plugin: canal
  #
  # services:
  #   kube-api:
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kube-controller:
  #     cluster_cidr: 10.42.0.0/16
  #     service_cluster_ip_range: 10.43.0.0/16
  #   kubelet:
  #     cluster_domain: cluster.local
  #     cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: false
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: '5000'
        heartbeat-interval: '500'
      gid: 52034
      retention: 72h
      snapshot: false
      uid: 52034
    kube_api:
      always_pull_images: false
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
      service_node_port_range: 30000-32767
    kube_controller:
      extra_args:
        address: 127.0.0.1
        feature-gates: RotateKubeletServerCertificate=true
        profiling: 'false'
        terminated-pod-gc-threshold: '1000'
    kubelet:
      extra_args:
        anonymous-auth: 'false'
        event-qps: '0'
        feature-gates: RotateKubeletServerCertificate=true
        make-iptables-util-chains: 'true'
        protect-kernel-defaults: 'true'
        streaming-connection-idle-timeout: 1800s
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      fail_swap_on: false
      generate_serving_certificate: true
    scheduler:
      extra_args:
        address: 127.0.0.1
        profiling: 'false'
  ssh_agent_auth: false
windows_prefered_cluster: false
```

Hardened Reference Ubuntu 18.04 LTS cloud-config

The reference cloud-config is generally used in cloud infrastructure environments to allow for configuration management of compute instances. It configures the Ubuntu operating system level settings needed before installing Kubernetes.

```yaml
#cloud-config
packages:
- curl
- jq
runcmd:
- sysctl -w vm.overcommit_memory=1
- sysctl -w kernel.panic=10
- sysctl -w kernel.panic_on_oops=1
- curl https://releases.rancher.com/install-docker/18.09.sh | sh
- usermod -aG docker ubuntu
- return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done
- addgroup --gid 52034 etcd
- useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
write_files:
- path: /etc/sysctl.d/kubelet.conf
  owner: root:root
  permissions: "0644"
  content: |
    vm.overcommit_memory=1
    kernel.panic=10
    kernel.panic_on_oops=1
```
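One detail that is easy to get wrong: cloud-init ignores user data whose first line is not exactly `#cloud-config`. A quick header check along these lines can catch that before instances are launched (a sample file is written here so the sketch is self-contained):

```shell
# Sketch: cloud-init only processes user data whose first line is the
# `#cloud-config` header; verify it before launching instances.
cat > sample-cloud-config.yaml <<'EOF'
#cloud-config
packages:
- curl
- jq
EOF
first=$(head -n 1 sample-cloud-config.yaml)
[ "$first" = "#cloud-config" ] && echo "valid cloud-config header" || echo "missing header"
```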