Hardening Guide with CIS 1.5 Benchmark

This document provides prescriptive guidance for hardening a production installation of an RKE cluster to be used with Rancher v2.5. It outlines the configurations and controls required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

This hardening guide describes how to secure the nodes in your cluster, and it is recommended to follow this guide before installing Kubernetes.

This hardening guide is intended to be used for RKE clusters and is associated with the following versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

Rancher Version    CIS Benchmark Version    Kubernetes Version
Rancher v2.5       Benchmark v1.5           Kubernetes v1.15

Overview

This document provides prescriptive guidance for hardening an RKE cluster to be used for installing Rancher v2.5 with Kubernetes v1.15, or for provisioning an RKE cluster with Kubernetes v1.15 to be used within Rancher v2.5. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the CIS 1.5 Benchmark - Self-Assessment Guide - Rancher v2.5.

Known Issues

  • Rancher exec shell and view logs for pods are not functional in a CIS 1.5 hardened setup when only a public IP is provided during registration of custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
  • When setting default_pod_security_policy_template_id: to restricted, Rancher creates RoleBindings and ClusterRoleBindings on the default service accounts. The CIS 1.5 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured so that they do not provide a service account token and do not have any explicit rights assignments; a way to audit for such bindings is sketched after this list.
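
The hedged check below is one way to list any RoleBindings or ClusterRoleBindings that reference a default service account. It assumes kubectl and jq are available, as elsewhere in this guide, and is illustrative rather than part of the benchmark tooling.

    # List RoleBindings/ClusterRoleBindings that reference a "default" service account.
    kubectl get rolebindings,clusterrolebindings -A -o json \
      | jq -r '.items[]
          | select(any(.subjects[]?; .kind == "ServiceAccount" and .name == "default"))
          | "\(.kind) \(.metadata.namespace // "-")/\(.metadata.name)"'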

Configure Kernel Runtime Parameters

The following sysctl configuration is recommended for all node types in the cluster. Set the following parameters in /etc/sysctl.d/90-kubelet.conf:

    vm.overcommit_memory=1
    vm.panic_on_oom=0
    kernel.panic=10
    kernel.panic_on_oops=1
    kernel.keys.root_maxbytes=25000000

Run sysctl -p /etc/sysctl.d/90-kubelet.conf to enable the settings.
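
To confirm the parameters are active on a node, read them back with sysctl (a minimal spot check; the output should match the values above):

    # Read back the kubelet-related kernel parameters.
    sysctl vm.overcommit_memory vm.panic_on_oom kernel.panic kernel.panic_on_oops kernel.keys.root_maxbytes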

Configure etcd user and group

A user account and group for the etcd service must be set up before installing RKE. The uid and gid of the etcd user are used in the RKE config.yml to set the proper permissions for files and directories at installation time.

Create etcd user and group

To create the etcd user and group, run the following console commands.

The commands below use 52034 for the uid and gid for example purposes. Any valid unused uid or gid could be used in lieu of 52034.

    groupadd --gid 52034 etcd
    useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
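
A quick sanity check that the account exists with the expected ids:

    # Confirm the etcd user and group were created with the example ids.
    id etcd
    # Expected output for the example ids above:
    # uid=52034(etcd) gid=52034(etcd) groups=52034(etcd)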

Update the RKE config.yml with the uid and gid of the etcd user:

    services:
      etcd:
        gid: 52034
        uid: 52034
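
After RKE provisions the cluster, the etcd data directory on each etcd node should be owned by this account. A hedged spot check, assuming the default /var/lib/etcd data directory:

    # On an etcd node: the data directory should be owned by etcd:etcd (52034:52034).
    stat -c '%U:%G (%u:%g)' /var/lib/etcd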

Set automountServiceAccountToken to false for default service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including default and kube-system, on a standard RKE install, the default service account must include this value:

    automountServiceAccountToken: false

Save the following yaml to a file called account_update.yaml.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: default
    automountServiceAccountToken: false

Create a bash script file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.

    #!/bin/bash -e
    for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
      kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
    done
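
Execute account_update.sh against the cluster, then spot-check the result. The jq filter below is an illustrative check, not part of the original script; it prints any namespace whose default service account still automounts a token:

    ./account_update.sh
    # Namespaces whose default service account still automounts a token (should be empty):
    kubectl get serviceaccounts -A -o json \
      | jq -r '.items[] | select(.metadata.name == "default" and .automountServiceAccountToken != false) | .metadata.namespace'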

Ensure that all Namespaces have Network Policies defined

Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. A network policy is a specification of how selections of pods are allowed to communicate with each other and other network endpoints.

Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled. This guide uses canal to provide the policy enforcement. Additional information about CNI providers can be found here.
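
As an illustration of that default-deny behavior (this example is not part of the reference configuration), a minimal policy that selects every pod while allowing nothing turns a namespace into default-deny in both directions. Saved as, say, default-deny-all.yaml (a hypothetical file name), it could be applied with kubectl apply -f default-deny-all.yaml -n <namespace>:

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
    spec:
      # Selects all pods in the namespace; with no ingress/egress rules listed,
      # both directions are denied for every selected pod.
      podSelector: {}
      policyTypes:
      - Ingress
      - Egress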

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a permissive example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following yaml as default-allow-all.yaml. Additional documentation about network policies can be found on the Kubernetes site.

This NetworkPolicy is not recommended for production use.

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-allow-all
    spec:
      podSelector: {}
      ingress:
      - {}
      egress:
      - {}
      policyTypes:
      - Ingress
      - Egress

Create a bash script file called apply_networkPolicy_to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.

    #!/bin/bash -e
    for namespace in $(kubectl get namespaces -A -o json | jq -r '.items[].metadata.name'); do
      kubectl apply -f default-allow-all.yaml -n ${namespace}
    done

Execute this script to apply the permissive default-allow-all.yaml NetworkPolicy to all namespaces.
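
To confirm the policy landed everywhere, list NetworkPolicies across all namespaces (every namespace should show default-allow-all):

    kubectl get networkpolicies -A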

Reference Hardened RKE cluster.yml configuration

The reference cluster.yml is used by the RKE CLI and provides the configuration needed to achieve a hardened install of Rancher Kubernetes Engine (RKE). The RKE install documentation provides additional details about the configuration items. This reference cluster.yml does not include the required nodes directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes

    # If you intend to deploy Kubernetes in an air-gapped environment,
    # please consult the documentation on how to configure custom RKE images.
    kubernetes_version: "v1.15.9-rancher1-1"
    enable_network_policy: true
    default_pod_security_policy_template_id: "restricted"
    # the nodes directive is required and will vary depending on your environment
    # documentation for node configuration can be found here:
    # https://rancher.com/docs/rke/latest/en/config-options/nodes
    nodes:
    services:
      etcd:
        uid: 52034
        gid: 52034
      kube-api:
        pod_security_policy: true
        secrets_encryption_config:
          enabled: true
        audit_log:
          enabled: true
        admission_configuration:
        event_rate_limit:
          enabled: true
      kube-controller:
        extra_args:
          feature-gates: "RotateKubeletServerCertificate=true"
      scheduler:
        image: ""
        extra_args: {}
        extra_binds: []
        extra_env: []
      kubelet:
        generate_serving_certificate: true
        extra_args:
          feature-gates: "RotateKubeletServerCertificate=true"
          protect-kernel-defaults: "true"
          tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
        extra_binds: []
        extra_env: []
        cluster_domain: ""
        infra_container_image: ""
        cluster_dns_server: ""
        fail_swap_on: false
      kubeproxy:
        image: ""
        extra_args: {}
        extra_binds: []
        extra_env: []
    network:
      plugin: ""
      options: {}
      mtu: 0
      node_selector: {}
    authentication:
      strategy: ""
      sans: []
      webhook: null
    addons: |
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ingress-nginx
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: default-psp-role
        namespace: ingress-nginx
      rules:
      - apiGroups:
        - extensions
        resourceNames:
        - default-psp
        resources:
        - podsecuritypolicies
        verbs:
        - use
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: default-psp-rolebinding
        namespace: ingress-nginx
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: default-psp-role
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:authenticated
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        name: cattle-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: default-psp-role
        namespace: cattle-system
      rules:
      - apiGroups:
        - extensions
        resourceNames:
        - default-psp
        resources:
        - podsecuritypolicies
        verbs:
        - use
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: default-psp-rolebinding
        namespace: cattle-system
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: default-psp-role
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:authenticated
      ---
      apiVersion: policy/v1beta1
      kind: PodSecurityPolicy
      metadata:
        name: restricted
      spec:
        requiredDropCapabilities:
        - NET_RAW
        privileged: false
        allowPrivilegeEscalation: false
        defaultAllowPrivilegeEscalation: false
        fsGroup:
          rule: RunAsAny
        runAsUser:
          rule: MustRunAsNonRoot
        seLinux:
          rule: RunAsAny
        supplementalGroups:
          rule: RunAsAny
        volumes:
        - emptyDir
        - secret
        - persistentVolumeClaim
        - downwardAPI
        - configMap
        - projected
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: psp:restricted
      rules:
      - apiGroups:
        - extensions
        resourceNames:
        - restricted
        resources:
        - podsecuritypolicies
        verbs:
        - use
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: psp:restricted
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: psp:restricted
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:authenticated
      ---
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: tiller
        namespace: kube-system
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    addons_include: []
    system_images:
      etcd: ""
      alpine: ""
      nginx_proxy: ""
      cert_downloader: ""
      kubernetes_services_sidecar: ""
      kubedns: ""
      dnsmasq: ""
      kubedns_sidecar: ""
      kubedns_autoscaler: ""
      coredns: ""
      coredns_autoscaler: ""
      kubernetes: ""
      flannel: ""
      flannel_cni: ""
      calico_node: ""
      calico_cni: ""
      calico_controllers: ""
      calico_ctl: ""
      calico_flexvol: ""
      canal_node: ""
      canal_cni: ""
      canal_flannel: ""
      canal_flexvol: ""
      weave_node: ""
      weave_cni: ""
      pod_infra_container: ""
      ingress: ""
      ingress_backend: ""
      metrics_server: ""
      windows_pod_infra_container: ""
    ssh_key_path: ""
    ssh_cert_path: ""
    ssh_agent_auth: false
    authorization:
      mode: ""
      options: {}
    ignore_docker_version: false
    private_registries: []
    ingress:
      provider: ""
      options: {}
      node_selector: {}
      extra_args: {}
      dns_policy: ""
      extra_envs: []
      extra_volumes: []
      extra_volume_mounts: []
    cluster_name: ""
    prefix_path: ""
    addon_job_timeout: 0
    bastion_host:
      address: ""
      port: ""
      user: ""
      ssh_key: ""
      ssh_key_path: ""
      ssh_cert: ""
      ssh_cert_path: ""
    monitoring:
      provider: ""
      options: {}
      node_selector: {}
    restore:
      restore: false
      snapshot_name: ""
    dns: null
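
With the nodes directive filled in for your environment, the hardened cluster is provisioned from this file using the standard RKE CLI invocation:

    rke up --config cluster.yml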

Reference Hardened RKE Template configuration

The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes. RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher documentation for additional installation and RKE Template details.

    #
    # Cluster Config
    #
    default_pod_security_policy_template_id: restricted
    docker_root_dir: /var/lib/docker
    enable_cluster_alerting: false
    enable_cluster_monitoring: false
    enable_network_policy: true
    #
    # Rancher Config
    #
    rancher_kubernetes_engine_config:
      addon_job_timeout: 30
      addons: |-
        ---
        apiVersion: v1
        kind: Namespace
        metadata:
          name: ingress-nginx
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: default-psp-role
          namespace: ingress-nginx
        rules:
        - apiGroups:
          - extensions
          resourceNames:
          - default-psp
          resources:
          - podsecuritypolicies
          verbs:
          - use
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: default-psp-rolebinding
          namespace: ingress-nginx
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: default-psp-role
        subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:serviceaccounts
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:authenticated
        ---
        apiVersion: v1
        kind: Namespace
        metadata:
          name: cattle-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: Role
        metadata:
          name: default-psp-role
          namespace: cattle-system
        rules:
        - apiGroups:
          - extensions
          resourceNames:
          - default-psp
          resources:
          - podsecuritypolicies
          verbs:
          - use
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: RoleBinding
        metadata:
          name: default-psp-rolebinding
          namespace: cattle-system
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: Role
          name: default-psp-role
        subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:serviceaccounts
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:authenticated
        ---
        apiVersion: policy/v1beta1
        kind: PodSecurityPolicy
        metadata:
          name: restricted
        spec:
          requiredDropCapabilities:
          - NET_RAW
          privileged: false
          allowPrivilegeEscalation: false
          defaultAllowPrivilegeEscalation: false
          fsGroup:
            rule: RunAsAny
          runAsUser:
            rule: MustRunAsNonRoot
          seLinux:
            rule: RunAsAny
          supplementalGroups:
            rule: RunAsAny
          volumes:
          - emptyDir
          - secret
          - persistentVolumeClaim
          - downwardAPI
          - configMap
          - projected
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRole
        metadata:
          name: psp:restricted
        rules:
        - apiGroups:
          - extensions
          resourceNames:
          - restricted
          resources:
          - podsecuritypolicies
          verbs:
          - use
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: psp:restricted
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: psp:restricted
        subjects:
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:serviceaccounts
        - apiGroup: rbac.authorization.k8s.io
          kind: Group
          name: system:authenticated
        ---
        apiVersion: v1
        kind: ServiceAccount
        metadata:
          name: tiller
          namespace: kube-system
        ---
        apiVersion: rbac.authorization.k8s.io/v1
        kind: ClusterRoleBinding
        metadata:
          name: tiller
        roleRef:
          apiGroup: rbac.authorization.k8s.io
          kind: ClusterRole
          name: cluster-admin
        subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
      ignore_docker_version: true
      kubernetes_version: v1.15.9-rancher1-1
      #
      # If you are using calico on AWS
      #
      #   network:
      #     plugin: calico
      #     calico_network_provider:
      #       cloud_provider: aws
      #
      # # To specify flannel interface
      #
      #   network:
      #     plugin: flannel
      #     flannel_network_provider:
      #       iface: eth1
      #
      # # To specify flannel interface for canal plugin
      #
      #   network:
      #     plugin: canal
      #     canal_network_provider:
      #       iface: eth1
      #
      network:
        mtu: 0
        plugin: canal
      #
      #   services:
      #     kube-api:
      #       service_cluster_ip_range: 10.43.0.0/16
      #     kube-controller:
      #       cluster_cidr: 10.42.0.0/16
      #       service_cluster_ip_range: 10.43.0.0/16
      #     kubelet:
      #       cluster_domain: cluster.local
      #       cluster_dns_server: 10.43.0.10
      #
      services:
        etcd:
          backup_config:
            enabled: false
            interval_hours: 12
            retention: 6
            safe_timestamp: false
          creation: 12h
          extra_args:
            election-timeout: '5000'
            heartbeat-interval: '500'
          gid: 52034
          retention: 72h
          snapshot: false
          uid: 52034
        kube_api:
          always_pull_images: false
          audit_log:
            enabled: true
          event_rate_limit:
            enabled: true
          pod_security_policy: true
          secrets_encryption_config:
            enabled: true
          service_node_port_range: 30000-32767
        kube_controller:
          extra_args:
            address: 127.0.0.1
            feature-gates: RotateKubeletServerCertificate=true
            profiling: 'false'
            terminated-pod-gc-threshold: '1000'
        kubelet:
          extra_args:
            anonymous-auth: 'false'
            event-qps: '0'
            feature-gates: RotateKubeletServerCertificate=true
            make-iptables-util-chains: 'true'
            protect-kernel-defaults: 'true'
            streaming-connection-idle-timeout: 1800s
            tls-cipher-suites: >-
              TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
          fail_swap_on: false
          generate_serving_certificate: true
        scheduler:
          extra_args:
            address: 127.0.0.1
            profiling: 'false'
      ssh_agent_auth: false
    windows_prefered_cluster: false

Hardened Reference Ubuntu 18.04 LTS cloud-config

The reference cloud-config is generally used in cloud infrastructure environments to allow for configuration management of compute instances. The reference config configures Ubuntu operating-system-level settings needed before installing Kubernetes.

    #cloud-config
    packages:
    - curl
    - jq
    runcmd:
    - sysctl -w vm.overcommit_memory=1
    - sysctl -w kernel.panic=10
    - sysctl -w kernel.panic_on_oops=1
    - curl https://releases.rancher.com/install-docker/18.09.sh | sh
    - usermod -aG docker ubuntu
    - return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done
    - addgroup --gid 52034 etcd
    - useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
    write_files:
    - path: /etc/sysctl.d/kubelet.conf
      owner: root:root
      permissions: "0644"
      content: |
        vm.overcommit_memory=1
        kernel.panic=10
        kernel.panic_on_oops=1
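
On first boot you can confirm the configuration was applied; cloud-init ships with Ubuntu 18.04, and the sysctl read-back mirrors the values written above:

    # Wait for cloud-init to finish, then read back the kernel parameters it set.
    cloud-init status --wait
    sysctl vm.overcommit_memory kernel.panic kernel.panic_on_oops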