This document explains how to make your cluster conform to the Kubernetes security benchmark published by the Center for Internet Security (CIS), and how to secure the nodes in your cluster. Follow this guide before installing Kubernetes.

This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher:

| Hardening Guide Version | Rancher Version | CIS Benchmark Version | Kubernetes Version |
| ----------------------- | --------------- | --------------------- | ------------------------------- |
| Hardening Guide v2.3.3  | Rancher v2.3.3  | Benchmark v1.4.1      | Kubernetes 1.14, 1.15, and 1.16 |

Click here to download a PDF version of this hardening guide.

The hardening guide below is intended for production clusters running Kubernetes 1.14, 1.15, or 1.16 with Rancher v2.3.3. It outlines how to meet the Kubernetes security standards published by the Center for Internet Security (CIS).

For more detail on how to evaluate a cluster against the official CIS benchmark, see the CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.3.3.

Profile Definitions

The following profile definitions agree with the CIS benchmarks for Kubernetes.

A profile is a set of configurations that provide a certain amount of hardening. Generally, the more hardened an environment is, the more it affects performance.

Level 1

Items in this profile intend to:

  • offer practical advice appropriate for the environment;
  • deliver an obvious security benefit; and
  • not alter the functionality or utility of the environment beyond an acceptable margin

Level 2

Items in this profile extend the “Level 1” profile and exhibit one or more of the following characteristics:

  • are intended for use in environments or use cases where security is paramount
  • act as a defense in depth measure
  • may negatively impact the utility or performance of the technology

1.1 - Rancher RKE Kubernetes cluster host configuration

(See Appendix A. for a full Ubuntu cloud-config example)

1.1.1 - Configure default sysctl settings on all hosts

Profile Applicability

  • Level 1

Description

Configure sysctl settings to match what the kubelet would set if allowed.

Rationale

We recommend that users launch the kubelet with the --protect-kernel-defaults option. The settings that the kubelet initially attempts to change can be set manually.

This supports the following control:

  • 2.1.7 - Ensure that the --protect-kernel-defaults argument is set to true (Scored)

Audit

Run the following commands and verify that each setting has the indicated value:

```bash
sysctl vm.overcommit_memory        # vm.overcommit_memory = 1
sysctl vm.panic_on_oom             # vm.panic_on_oom = 0
sysctl kernel.panic                # kernel.panic = 10
sysctl kernel.panic_on_oops        # kernel.panic_on_oops = 1
sysctl kernel.keys.root_maxkeys    # kernel.keys.root_maxkeys = 1000000
sysctl kernel.keys.root_maxbytes   # kernel.keys.root_maxbytes = 25000000
```

Remediation

  • Set the following parameters in /etc/sysctl.d/90-kubelet.conf on all nodes:

```
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxkeys=1000000
kernel.keys.root_maxbytes=25000000
```

  • Run `sysctl -p /etc/sysctl.d/90-kubelet.conf` to enable the settings.
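The remediation above can be sketched as a short script. This is a sketch, not part of the official guide: the file path defaults to a temporary location for illustration, and applying the settings is left commented out; on a real node set SYSCTL_CONF to /etc/sysctl.d/90-kubelet.conf and run `sysctl -p` as root.

```shell
# Sketch: write the six kubelet-related sysctl settings and (optionally) apply them.
# SYSCTL_CONF defaults to a temp path here for illustration only; on a real node,
# use /etc/sysctl.d/90-kubelet.conf and run the commented `sysctl -p` as root.
SYSCTL_CONF=${SYSCTL_CONF:-/tmp/90-kubelet.conf}

cat > "$SYSCTL_CONF" <<'EOF'
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.keys.root_maxkeys=1000000
kernel.keys.root_maxbytes=25000000
EOF

# Apply the settings (requires root); disabled in this sketch:
# sysctl -p "$SYSCTL_CONF"
echo "wrote $(grep -c '=' "$SYSCTL_CONF") settings to $SYSCTL_CONF"
```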

1.4.11 - Ensure that the etcd data directory permissions are set to 700 or more restrictive

Profile Applicability

  • Level 1

Description

Ensure that the etcd data directory has permissions of 700 or more restrictive.

Rationale

etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should not be readable or writable by any group members or the world.

Audit

On the etcd server node, get the etcd data directory (passed via the --data-dir argument) from the command below:

```bash
ps -ef | grep etcd
```

Run the command below, substituting the etcd data directory found above. For example:

```bash
stat -c %a /var/lib/etcd
```

Verify that the permissions are 700 or more restrictive.

Remediation

Follow the steps as documented in 1.4.12 remediation.

1.4.12 - Ensure that the etcd data directory ownership is set to etcd:etcd

Profile Applicability

  • Level 1

Description

Ensure that the etcd data directory ownership is set to etcd:etcd.

Rationale

etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. This data directory should be protected from any unauthorized reads or writes. It should be owned by etcd:etcd.

Audit

On an etcd server node, get the etcd data directory (passed via the --data-dir argument) from the command below:

```bash
ps -ef | grep etcd
```

Run the command below, substituting the etcd data directory found above. For example:

```bash
stat -c %U:%G /var/lib/etcd
```

Verify that the ownership is set to etcd:etcd.

Remediation

  • On the etcd server node(s), add the etcd user:

```bash
useradd -c "Etcd user" -d /var/lib/etcd etcd
```

Record the uid/gid:

```bash
id etcd
```

  • Add the following to the RKE cluster.yml etcd section under services:

```yaml
services:
  etcd:
    uid: <etcd user uid recorded previously>
    gid: <etcd user gid recorded previously>
```
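The uid/gid can be pulled out of the `id` output mechanically. The sketch below parses a hard-coded sample line (the value 52034 is invented for illustration); on a real node you would set ID_LINE from `id etcd` instead:

```shell
# Sketch: extract the uid and gid from `id etcd` output for use in cluster.yml.
# ID_LINE is a made-up sample here; on a real node use: ID_LINE=$(id etcd)
ID_LINE='uid=52034(etcd) gid=52034(etcd) groups=52034(etcd)'

ETCD_UID=$(printf '%s' "$ID_LINE" | sed 's/^uid=\([0-9]*\).*/\1/')
ETCD_GID=$(printf '%s' "$ID_LINE" | sed 's/.*gid=\([0-9]*\)(.*/\1/')

# These are the values to place under services.etcd in cluster.yml.
echo "uid=$ETCD_UID gid=$ETCD_GID"
```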

2.1 - Rancher HA Kubernetes Cluster Configuration via RKE

(See Appendix B. for full RKE cluster.yml example)

2.1.1 - Configure kubelet options

Profile Applicability

  • Level 1

Description

Ensure Kubelet options are configured to match CIS controls.

Rationale

To pass the following controls in the CIS benchmark, ensure the appropriate flags are passed to the Kubelet.

  • 2.1.1 - Ensure that the --anonymous-auth argument is set to false (Scored)
  • 2.1.2 - Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)
  • 2.1.6 - Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)
  • 2.1.7 - Ensure that the --protect-kernel-defaults argument is set to true (Scored)
  • 2.1.8 - Ensure that the --make-iptables-util-chains argument is set to true (Scored)
  • 2.1.10 - Ensure that the --event-qps argument is set to 0 (Scored)
  • 2.1.13 - Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)
  • 2.1.14 - Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored)

Audit

Inspect the Kubelet containers on all hosts and verify that they are running with the following options:

  • --streaming-connection-idle-timeout=<duration greater than 0>
  • --authorization-mode=Webhook
  • --protect-kernel-defaults=true
  • --make-iptables-util-chains=true
  • --event-qps=0
  • --anonymous-auth=false
  • --feature-gates="RotateKubeletServerCertificate=true"
  • --tls-cipher-suites="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"

Remediation

  • Add the following to the RKE cluster.yml kubelet section under services:

```yaml
services:
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
```

Where <duration> in the audit above (for --streaming-connection-idle-timeout) is a value with a unit suffix, such as 1800s.

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```
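The per-flag audit above can be automated with a simple membership check. In this sketch CMDLINE is a hard-coded sample; on a real node it would come from the running container, for example `CMDLINE=$(docker inspect kubelet --format '{{join .Args " "}}')`:

```shell
# Sketch: verify a kubelet command line contains the required CIS flags.
# CMDLINE is a sample here; on a node, obtain it from the running container,
# e.g.: CMDLINE=$(docker inspect kubelet --format '{{join .Args " "}}')
CMDLINE='--anonymous-auth=false --authorization-mode=Webhook --protect-kernel-defaults=true --make-iptables-util-chains=true --event-qps=0 --streaming-connection-idle-timeout=1800s'

MISSING=0
for flag in \
  --anonymous-auth=false \
  --authorization-mode=Webhook \
  --protect-kernel-defaults=true \
  --make-iptables-util-chains=true \
  --event-qps=0; do
  case " $CMDLINE " in
    *" $flag "*) ;;                               # flag present
    *) echo "missing: $flag"; MISSING=$((MISSING + 1)) ;;
  esac
done
echo "$MISSING flags missing"
```

The same pattern works for the scheduler and controller-manager flag checks later in this guide.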

2.1.2 - Configure kube-api options

Profile Applicability

  • Level 1

Description

Ensure the RKE configuration is set to deploy the kube-api service with the options required for controls.

NOTE:

Enabling the AlwaysPullImages admission control plugin can cause degraded performance due to overhead of always pulling images. Enabling the DenyEscalatingExec admission control plugin will prevent the ‘Launch kubectl’ functionality in the UI from working.

Rationale

To pass the following controls for the kube-api server ensure RKE configuration passes the appropriate options.

  • 1.1.1 - Ensure that the --anonymous-auth argument is set to false (Scored)
  • 1.1.8 - Ensure that the --profiling argument is set to false (Scored)
  • 1.1.11 - Ensure that the admission control plugin AlwaysPullImages is set (Scored)
  • 1.1.12 - Ensure that the admission control plugin DenyEscalatingExec is set (Scored)
  • 1.1.14 - Ensure that the admission control plugin NamespaceLifecycle is set (Scored)
  • 1.1.15 - Ensure that the --audit-log-path argument is set as appropriate (Scored)
  • 1.1.16 - Ensure that the --audit-log-maxage argument is set as appropriate (Scored)
  • 1.1.17 - Ensure that the --audit-log-maxbackup argument is set as appropriate (Scored)
  • 1.1.18 - Ensure that the --audit-log-maxsize argument is set as appropriate (Scored)
  • 1.1.23 - Ensure that the --service-account-lookup argument is set to true (Scored)
  • 1.1.24 - Ensure that the admission control plugin PodSecurityPolicy is set (Scored)
  • 1.1.30 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored)
  • 1.1.34 - Ensure that the --encryption-provider-config argument is set as appropriate (Scored)
  • 1.1.35 - Ensure that the encryption provider is set to aescbc (Scored)
  • 1.1.36 - Ensure that the admission control plugin EventRateLimit is set (Scored)
  • 1.1.37 - Ensure that the AdvancedAuditing argument is not set to false (Scored)

Audit

  • On nodes with the controlplane role, inspect the kube-apiserver containers:

```bash
docker inspect kube-apiserver
```

  • Look for the following options in the command section of the output:

```
--anonymous-auth=false
--profiling=false
--service-account-lookup=true
--enable-admission-plugins=ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml
--admission-control-config-file=/etc/kubernetes/admission.yaml
--audit-log-path=/var/log/kube-audit/audit-log.json
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--audit-log-format=json
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```

  • In the volume section of the output, ensure the bind mount is present:

```
/var/log/kube-audit:/var/log/kube-audit
```
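Checking the long --enable-admission-plugins value by eye is error-prone. The sketch below tests membership in the comma-separated list; the PLUGINS value is copied from the audit above, and on a real node you would extract it from the `docker inspect` output instead:

```shell
# Sketch: confirm required admission plugins appear in the comma-separated
# value of --enable-admission-plugins (value copied from the audit above).
PLUGINS='ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy'

FAIL=0
for p in AlwaysPullImages DenyEscalatingExec NamespaceLifecycle EventRateLimit PodSecurityPolicy; do
  case ",$PLUGINS," in
    *",$p,"*) ;;                                  # plugin enabled
    *) echo "not enabled: $p"; FAIL=1 ;;
  esac
done
[ "$FAIL" -eq 0 ] && echo "all required plugins enabled"
```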

Remediation

  • In the RKE cluster.yml, add the following directives to the kube-api section under services:

```yaml
services:
  kube_api:
    always_pull_images: true
    pod_security_policy: true
    service_node_port_range: 30000-32767
    event_rate_limit:
      enabled: true
    audit_log:
      enabled: true
    secrets_encryption_config:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      profiling: "false"
      service-account-lookup: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
```

For Kubernetes 1.14, enable-admission-plugins should be:

```yaml
enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,PodSecurityPolicy,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,EventRateLimit"
```

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```

NOTE:

Files that are placed in /opt/kubernetes need to be mounted in using the extra_binds functionality in RKE.

2.1.3 - Configure scheduler options

Profile Applicability

  • Level 1

Description

Set the appropriate options for the Kubernetes scheduling service.

NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.

Rationale

To address the following controls on the CIS benchmark, the command line options should be set on the Kubernetes scheduler.

  • 1.2.1 - Ensure that the --profiling argument is set to false (Scored)
  • 1.2.2 - Ensure that the --address argument is set to 127.0.0.1 (Scored)

Audit

  • On nodes with the controlplane role, inspect the kube-scheduler containers:

```bash
docker inspect kube-scheduler
```

  • Verify the following options are set in the command section:

```
--profiling=false
--address=127.0.0.1
```

Remediation

  • In the RKE cluster.yml file, ensure the following options are set:

```yaml
services:
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
```

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```

2.1.4 - Configure controller options

Profile Applicability

  • Level 1

Description

Set the appropriate arguments on the Kubernetes controller manager.

NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.

Rationale

To address the following controls the options need to be passed to the Kubernetes controller manager.

  • 1.3.1 - Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Scored)
  • 1.3.2 - Ensure that the --profiling argument is set to false (Scored)
  • 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)
  • 1.3.7 - Ensure that the --address argument is set to 127.0.0.1 (Scored)

Audit

  • On nodes with the controlplane role, inspect the kube-controller-manager container:

```bash
docker inspect kube-controller-manager
```

  • Verify the following options are set in the command section:

```
--terminated-pod-gc-threshold=1000
--profiling=false
--address=127.0.0.1
--feature-gates="RotateKubeletServerCertificate=true"
```

Remediation

  • In the RKE cluster.yml file, ensure the following options are set:

```yaml
services:
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
```

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```

2.1.5 - Configure addons and PSPs

Profile Applicability

  • Level 1

Description

Configure a restrictive pod security policy (PSP) as the default and create role bindings for system level services to use the less restrictive default PSP.

Rationale

To address the following controls, a restrictive default PSP needs to be applied as the default. Role bindings need to be in place to allow system services to still function.

  • 1.7.1 - Do not admit privileged containers (Not Scored)
  • 1.7.2 - Do not admit containers wishing to share the host process ID namespace (Not Scored)
  • 1.7.3 - Do not admit containers wishing to share the host IPC namespace (Not Scored)
  • 1.7.4 - Do not admit containers wishing to share the host network namespace (Not Scored)
  • 1.7.5 - Do not admit containers with allowPrivilegeEscalation (Not Scored)
  • 1.7.6 - Do not admit root containers (Not Scored)
  • 1.7.7 - Do not admit containers with dangerous capabilities (Not Scored)

Audit

  • Verify that the cattle-system namespace exists:

```bash
kubectl get ns | grep cattle
```

  • Verify that the roles exist:

```bash
kubectl get role default-psp-role -n ingress-nginx
kubectl get role default-psp-role -n cattle-system
kubectl get clusterrole restricted-clusterrole
```

  • Verify the bindings are set correctly:

```bash
kubectl get rolebinding -n ingress-nginx default-psp-rolebinding
kubectl get rolebinding -n cattle-system default-psp-rolebinding
kubectl get clusterrolebinding restricted-clusterrolebinding
```

  • Verify the restricted PSP is present:

```bash
kubectl get psp restricted-psp
```

Remediation

  • In the RKE cluster.yml file, ensure the following options are set:

```yaml
addons: |
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: extensions/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted-psp
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: restricted-clusterrole
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: restricted-clusterrolebinding
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: restricted-clusterrole
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
```

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```

3.1 - Rancher Management Control Plane Installation

3.1.1 - Disable the local cluster option

Profile Applicability

  • Level 2

Description

When deploying Rancher, disable the local cluster option on the Rancher Server.

NOTE: This requires Rancher v2.1.2 or above.

Rationale

Having access to the local cluster from the Rancher UI is convenient for troubleshooting and debugging; however, if the local cluster is enabled in the Rancher UI, a user has access to all elements of the system, including the Rancher management server itself. Disabling the local cluster is a defense in depth measure and removes the possible attack vector from the Rancher UI and API.

Audit

  • Verify the Rancher deployment has the --add-local=false option set:

```bash
kubectl get deployment rancher -n cattle-system -o yaml | grep 'add-local'
```

  • In the Rancher UI, go to Clusters in the Global view and verify that no local cluster is present.

Remediation

  • While upgrading or installing Rancher v2.3.3 or above, provide the following flag:

```
--set addLocal="false"
```
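In context, the flag is one `--set` among the others passed to Helm. The sketch below shows a typical invocation; the chart reference, namespace, and hostname are placeholder assumptions for a standard HA install, not values from this guide, and HELM defaults to `echo` so the command is printed rather than executed:

```shell
# Sketch: a Helm upgrade carrying the addLocal flag. Chart reference,
# namespace, and hostname are placeholders; adjust for your environment.
# HELM defaults to `echo` so this prints the command instead of running it.
HELM=${HELM:-echo}

$HELM upgrade rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set addLocal="false"
```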

3.1.2 - Enable Rancher Audit logging

Profile Applicability

  • Level 1

Description

Enable Rancher’s built-in audit logging capability.

Rationale

Tracking down what actions were performed by users in Rancher can provide insight during post mortems, and if monitored proactively can be used to quickly detect malicious actions.

Audit

  • Verify that the audit log parameters were passed into the Rancher deployment:

```bash
kubectl get deployment rancher -n cattle-system -o yaml | grep auditLog
```

  • Verify that the log is going to the appropriate destination, as set by auditLog.destination:

    • sidecar:

      1. List pods:

```bash
kubectl get pods -n cattle-system
```

      2. Tail logs:

```bash
kubectl logs <pod> -n cattle-system -c rancher-audit-log
```

    • hostPath:

      1. On the worker nodes running the Rancher pods, verify that the log files are being written to the destination indicated in auditLog.hostPath.

Remediation

Upgrade the Rancher server installation using Helm, and configure the audit log settings. The instructions for doing so can be found in the reference section below.

Reference

3.2 - Rancher Management Control Plane Authentication

3.2.1 - Change the local admin password from the default value

Profile Applicability

  • Level 1

Description

The local admin password should be changed from the default.

Rationale

The default admin password is common across all Rancher installations and should be changed immediately upon startup.

Audit

Attempt to log in to the UI with the following credentials:

  • Username: admin
  • Password: admin

The login attempt must not succeed.
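This check can also be scripted against the API. The endpoint below (/v3-public/localProviders/local?action=login) is the one the Rancher UI uses for local logins, but verify it against your version's API documentation; the script defaults to a dry run that only prints the request:

```shell
# Sketch: check that the default admin/admin credentials are rejected.
# The login endpoint is an assumption based on Rancher's v3-public API;
# confirm it for your version. DRY_RUN=1 (the default) only prints the request.
RANCHER_URL=${RANCHER_URL:-https://rancher.example.com}
DRY_RUN=${DRY_RUN:-1}

PAYLOAD='{"username":"admin","password":"admin"}'
REQUEST="curl -sk -o /dev/null -w %{http_code} -X POST -H Content-Type:application/json -d $PAYLOAD $RANCHER_URL/v3-public/localProviders/local?action=login"

if [ "$DRY_RUN" -eq 1 ]; then
  echo "$REQUEST"
else
  CODE=$($REQUEST)
  # A 401 means the default credentials no longer work (the desired state).
  [ "$CODE" = "401" ] && echo "default credentials rejected" || echo "WARNING: got HTTP $CODE"
fi
```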

Remediation

Change the password from admin to a password that meets the recommended password standards for your organization.

3.2.2 - Configure an Identity Provider for Authentication

Profile Applicability

  • Level 1

Description

When running Rancher in a production environment, configure an identity provider for authentication.

Rationale

Rancher supports several authentication backends that are common in enterprises. It is recommended to tie Rancher into an external authentication system to simplify user and group access in the Rancher cluster. Doing so assures that access control follows the organization’s change management process for user accounts.

Audit

  • In the Rancher UI, select Global
  • Select Security
  • Select Authentication
  • Ensure the authentication provider for your environment is active and configured correctly

Remediation

Configure the appropriate authentication provider for your Rancher installation according to the documentation found at the link in the reference section below.

Reference

3.3 - Rancher Management Control Plane RBAC

3.3.1 - Ensure that administrator privileges are only granted to those who require them

Profile Applicability

  • Level 1

Description

Restrict administrator access to only those responsible for managing and operating the Rancher server.

Rationale

The admin privilege level gives the user the highest level of access to the Rancher server and all attached clusters. This privilege should only be granted to a few people who are responsible for the availability and support of Rancher and the clusters that it manages.

Audit

The following script uses the Rancher API to show users with administrator privileges:

```bash
#!/bin/bash
for i in $(curl -sk -u 'token-<id>:<secret>' 'https://<RANCHER_URL>/v3/users' | jq -r '.data[].links.globalRoleBindings'); do
  curl -sk -u 'token-<id>:<secret>' "$i" | jq '.data[] | "\(.userId) \(.globalRoleId)"'
done
```

The admin role should only be assigned to users that require administrative privileges. Any role that is not admin or user should be audited in the RBAC section of the UI to ensure that the privileges adhere to policies for global access.

The Rancher server permits customization of the default global permissions. We recommend that auditors also review the policies of any custom global roles.

Remediation

Remove the admin role from any user that does not require administrative privileges.

3.4 - Rancher Management Control Plane Configuration

3.4.1 - Ensure only approved node drivers are active

Profile Applicability

  • Level 1

Description

Ensure that node drivers that are not needed or approved are not active in the Rancher console.

Rationale

Node drivers are used to provision compute nodes in various cloud providers and local IaaS infrastructure. For convenience, popular cloud providers are enabled by default. If the organization does not intend to use these or does not allow users to provision resources in certain providers, the drivers should be disabled. This will prevent users from using Rancher resources to provision the nodes.

Audit

  • In the Rancher UI select Global
  • Select Node Drivers
  • Review the list of node drivers that are in an Active state.

Remediation

If a disallowed node driver is active, visit the Node Drivers page under Global and disable it.

4.1 - Rancher Kubernetes Custom Cluster Configuration via RKE

(See Appendix C. for full RKE template example)

4.1.1 - Configure kubelet options

Profile Applicability

  • Level 1

Description

Ensure Kubelet options are configured to match CIS controls.

Rationale

To pass the following controls in the CIS benchmark, ensure the appropriate flags are passed to the Kubelet.

  • 2.1.1 - Ensure that the --anonymous-auth argument is set to false (Scored)
  • 2.1.2 - Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)
  • 2.1.6 - Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)
  • 2.1.7 - Ensure that the --protect-kernel-defaults argument is set to true (Scored)
  • 2.1.8 - Ensure that the --make-iptables-util-chains argument is set to true (Scored)
  • 2.1.10 - Ensure that the --event-qps argument is set to 0 (Scored)
  • 2.1.13 - Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)
  • 2.1.14 - Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored)

Audit

Inspect the Kubelet containers on all hosts and verify that they are running with the following options:

  • --streaming-connection-idle-timeout=<duration greater than 0>
  • --authorization-mode=Webhook
  • --protect-kernel-defaults=true
  • --make-iptables-util-chains=true
  • --event-qps=0
  • --anonymous-auth=false
  • --feature-gates="RotateKubeletServerCertificate=true"
  • --tls-cipher-suites="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"

Remediation

  • Add the following to the RKE cluster.yml kubelet section under services:

```yaml
services:
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
```

Where <duration> in the audit above (for --streaming-connection-idle-timeout) is a value with a unit suffix, such as 1800s.

  • Reconfigure the cluster:

```bash
rke up --config cluster.yml
```

4.1.2 - Configure kube-api options

Profile Applicability

  • Level 1

Description

Ensure the RKE configuration is set to deploy the kube-api service with the options required for controls.

NOTE:

Enabling the AlwaysPullImages admission control plugin can cause degraded performance due to overhead of always pulling images. Enabling the DenyEscalatingExec admission control plugin will prevent the ‘Launch kubectl’ functionality in the UI from working.

Rationale

To pass the following controls for the kube-api server ensure RKE configuration passes the appropriate options.

  • 1.1.1 - Ensure that the --anonymous-auth argument is set to false (Scored)
  • 1.1.8 - Ensure that the --profiling argument is set to false (Scored)
  • 1.1.11 - Ensure that the admission control plugin AlwaysPullImages is set (Scored)
  • 1.1.12 - Ensure that the admission control plugin DenyEscalatingExec is set (Scored)
  • 1.1.14 - Ensure that the admission control plugin NamespaceLifecycle is set (Scored)
  • 1.1.15 - Ensure that the --audit-log-path argument is set as appropriate (Scored)
  • 1.1.16 - Ensure that the --audit-log-maxage argument is set as appropriate (Scored)
  • 1.1.17 - Ensure that the --audit-log-maxbackup argument is set as appropriate (Scored)
  • 1.1.18 - Ensure that the --audit-log-maxsize argument is set as appropriate (Scored)
  • 1.1.23 - Ensure that the --service-account-lookup argument is set to true (Scored)
  • 1.1.24 - Ensure that the admission control plugin PodSecurityPolicy is set (Scored)
  • 1.1.30 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Not Scored)
  • 1.1.34 - Ensure that the --encryption-provider-config argument is set as appropriate (Scored)
  • 1.1.35 - Ensure that the encryption provider is set to aescbc (Scored)
  • 1.1.36 - Ensure that the admission control plugin EventRateLimit is set (Scored)
  • 1.1.37 - Ensure that the AdvancedAuditing argument is not set to false (Scored)

Audit

  • On nodes with the controlplane role, inspect the kube-apiserver containers:

```bash
docker inspect kube-apiserver
```

  • Look for the following options in the command section of the output:

```
--anonymous-auth=false
--profiling=false
--service-account-lookup=true
--enable-admission-plugins=ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy
--encryption-provider-config=/etc/kubernetes/ssl/encryption.yaml
--admission-control-config-file=/etc/kubernetes/admission.yaml
--audit-log-path=/var/log/kube-audit/audit-log.json
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--audit-log-format=json
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
```

  • In the volume section of the output, ensure the bind mount is present:

```
/var/log/kube-audit:/var/log/kube-audit
```

Remediation

  • In the RKE cluster.yml add the following directives to the kube-api section under services:
```yaml
services:
  kube_api:
    always_pull_images: true
    pod_security_policy: true
    service_node_port_range: 30000-32767
    event_rate_limit:
      enabled: true
    audit_log:
      enabled: true
    secrets_encryption_config:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      profiling: "false"
      service-account-lookup: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
    - "/opt/kubernetes:/opt/kubernetes"
```

For Kubernetes 1.14, `enable-admission-plugins` should instead be:

```yaml
enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,PodSecurityPolicy,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,EventRateLimit"
```
  • Reconfigure the cluster:
  1. rke up --config cluster.yml

NOTE:

Files placed in /opt/kubernetes need to be mounted into the containers using the extra_binds functionality in RKE.
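
For example, a certificate staged on the host under /opt/kubernetes could be passed to the API server as follows (the file name and flag here are purely illustrative):

```yaml
services:
  kube_api:
    extra_args:
      # hypothetical CA bundle staged on every controlplane node
      oidc-ca-file: /opt/kubernetes/oidc-ca.pem
    extra_binds:
    - "/opt/kubernetes:/opt/kubernetes"
```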

4.1.3 - Configure scheduler options

Profile Applicability

  • Level 1

Description

Set the appropriate options for the Kubernetes scheduling service.

NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.

Rationale

To address the following controls on the CIS benchmark, the command line options should be set on the Kubernetes scheduler.

  • 1.2.1 - Ensure that the --profiling argument is set to false (Scored)
  • 1.2.2 - Ensure that the --address argument is set to 127.0.0.1 (Scored)

Audit

  • On nodes with the controlplane role, inspect the kube-scheduler container:
  1. docker inspect kube-scheduler
  • Verify the following options are set in the command section.
  1. --profiling=false
  2. --address=127.0.0.1

Remediation

  • In the RKE cluster.yml file ensure the following options are set:
```yaml
services:
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
```
  • Reconfigure the cluster:
  1. rke up --config cluster.yml

4.1.4 - Configure controller options

Profile Applicability

  • Level 1

Description

Set the appropriate arguments on the Kubernetes controller manager.

NOTE: Setting --address to 127.0.0.1 will prevent Rancher cluster monitoring from scraping this endpoint.

Rationale

To address the following controls, these options need to be passed to the Kubernetes controller manager.

  • 1.3.1 - Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Scored)
  • 1.3.2 - Ensure that the --profiling argument is set to false (Scored)
  • 1.3.6 - Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)
  • 1.3.7 - Ensure that the --address argument is set to 127.0.0.1 (Scored)

Audit

  • On nodes with the controlplane role inspect the kube-controller-manager container:
  1. docker inspect kube-controller-manager
  • Verify the following options are set in the command section:
  1. --terminated-pod-gc-threshold=1000
  2. --profiling=false
  3. --address=127.0.0.1
  4. --feature-gates="RotateKubeletServerCertificate=true"

Remediation

  • In the RKE cluster.yml file ensure the following options are set:
```yaml
services:
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
```
  • Reconfigure the cluster:
  1. rke up --config cluster.yml

4.1.5 - Check PSPs

Profile Applicability

  • Level 1

Description

Configure a restrictive pod security policy (PSP) as the default and create role bindings for system level services to use the less restrictive default PSP.

Rationale

To address the following controls, a restrictive default PSP needs to be applied as the default. Role bindings need to be in place to allow system services to still function.

  • 1.7.1 - Do not admit privileged containers (Not Scored)
  • 1.7.2 - Do not admit containers wishing to share the host process ID namespace (Not Scored)
  • 1.7.3 - Do not admit containers wishing to share the host IPC namespace (Not Scored)
  • 1.7.4 - Do not admit containers wishing to share the host network namespace (Not Scored)
  • 1.7.5 - Do not admit containers with allowPrivilegeEscalation (Not Scored)
  • 1.7.6 - Do not admit root containers (Not Scored)
  • 1.7.7 - Do not admit containers with dangerous capabilities (Not Scored)

Audit

  • Verify that the cattle-system namespace exists:
  1. kubectl get ns |grep cattle
  • Verify that the roles exist:
  1. kubectl get role default-psp-role -n ingress-nginx
  2. kubectl get role default-psp-role -n cattle-system
  3. kubectl get clusterrole restricted-clusterrole
  • Verify the bindings are set correctly:
  1. kubectl get rolebinding -n ingress-nginx default-psp-rolebinding
  2. kubectl get rolebinding -n cattle-system default-psp-rolebinding
  • Verify the restricted PSP is present.
  1. kubectl get psp restricted-psp
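
Rancher creates the restricted PSP when pod_security_policy is enabled. A policy that satisfies controls 1.7.1 through 1.7.7 looks roughly like the following sketch (the policy Rancher actually provisions may differ in detail):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-psp
spec:
  privileged: false                  # 1.7.1 - no privileged containers
  hostPID: false                     # 1.7.2 - no host PID namespace
  hostIPC: false                     # 1.7.3 - no host IPC namespace
  hostNetwork: false                 # 1.7.4 - no host network namespace
  allowPrivilegeEscalation: false    # 1.7.5 - no privilege escalation
  runAsUser:
    rule: MustRunAsNonRoot           # 1.7.6 - no root containers
  requiredDropCapabilities:          # 1.7.7 - drop dangerous capabilities
  - ALL
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim
```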

Appendix A - Complete ubuntu cloud-config Example

A cloud-config file that automates the manual hardening steps during node deployment.

```yaml
#cloud-config
bootcmd:
- apt-get update
- apt-get install -y apt-transport-https
apt:
  sources:
    docker:
      source: "deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable"
      keyid: 0EBFCD88
packages:
- [docker-ce, '5:19.03.5~3-0~ubuntu-bionic']
- jq
write_files:
# 1.1.1 - Configure default sysctl settings on all hosts
- path: /etc/sysctl.d/90-kubelet.conf
  owner: root:root
  permissions: '0644'
  content: |
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    kernel.panic=10
    kernel.panic_on_oops=1
    kernel.keys.root_maxkeys=1000000
    kernel.keys.root_maxbytes=25000000
# 1.4.12 etcd user
groups:
- etcd
users:
- default
- name: etcd
  gecos: Etcd user
  primary_group: etcd
  homedir: /var/lib/etcd
# 1.4.11 etcd data dir
runcmd:
- chmod 0700 /var/lib/etcd
- usermod -G docker -a ubuntu
- sysctl -p /etc/sysctl.d/90-kubelet.conf
```

Appendix B - Complete RKE cluster.yml Example

Before applying, replace services.etcd.gid and services.etcd.uid with the etcd group and user IDs that were created on the etcd nodes.

{{% accordion id="cluster-1.14" label="RKE yaml for k8s 1.14" %}}

```yaml
nodes:
- address: 18.191.190.205
  internal_address: 172.31.24.213
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.203
  internal_address: 172.31.24.203
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.10
  internal_address: 172.31.24.244
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
addon_job_timeout: 30
authentication:
  strategy: x509
authorization: {}
bastion_host:
  ssh_agent_auth: false
cloud_provider: {}
ignore_docker_version: true
#
# # Currently only nginx ingress provider is supported.
# # To disable ingress controller, set `provider: none`
# # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
ingress:
  provider: nginx
kubernetes_version: v1.14.9-rancher1-1
monitoring:
  provider: metrics-server
#
# If you are using calico on AWS
#
#    network:
#      plugin: calico
#      calico_network_provider:
#        cloud_provider: aws
#
# # To specify flannel interface
#
#    network:
#      plugin: flannel
#      flannel_network_provider:
#        iface: eth1
#
# # To specify flannel interface for canal plugin
#
#    network:
#      plugin: canal
#      canal_network_provider:
#        iface: eth1
#
network:
  options:
    flannel_backend_type: vxlan
  plugin: canal
restore:
  restore: false
#
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10
#
services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 12
      retention: 6
      safe_timestamp: false
    creation: 12h
    extra_args:
      election-timeout: "5000"
      heartbeat-interval: "500"
    gid: 1000
    retention: 72h
    snapshot: false
    uid: 1000
  kube-api:
    always_pull_images: true
    audit_log:
      enabled: true
    event_rate_limit:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      enable-admission-plugins: >-
        ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,PodSecurityPolicy,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,EventRateLimit
      profiling: "false"
      service-account-lookup: "true"
      tls-cipher-suites: >-
        TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
    extra_binds:
    - "/opt/kubernetes:/opt/kubernetes"
    pod_security_policy: true
    secrets_encryption_config:
      enabled: true
    service_node_port_range: 30000-32767
  kube-controller:
    extra_args:
      address: 127.0.0.1
      feature-gates: RotateKubeletServerCertificate=true
      profiling: "false"
      terminated-pod-gc-threshold: "1000"
  kubelet:
    extra_args:
      protect-kernel-defaults: "true"
    fail_swap_on: false
    generate_serving_certificate: true
  kubeproxy: {}
  scheduler:
    extra_args:
      address: 127.0.0.1
      profiling: "false"
ssh_agent_auth: false
```

{{% /accordion %}}

{{% accordion id="cluster-1.15" label="RKE yaml for k8s 1.15" %}}

```yaml
nodes:
- address: 18.191.190.205
  internal_address: 172.31.24.213
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.203
  internal_address: 172.31.24.203
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.10
  internal_address: 172.31.24.244
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
addon_job_timeout: 30
authentication:
  strategy: x509
ignore_docker_version: true
#
# # Currently only nginx ingress provider is supported.
# # To disable ingress controller, set `provider: none`
# # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
ingress:
  provider: nginx
kubernetes_version: v1.15.6-rancher1-2
monitoring:
  provider: metrics-server
#
# If you are using calico on AWS
#
#    network:
#      plugin: calico
#      calico_network_provider:
#        cloud_provider: aws
#
# # To specify flannel interface
#
#    network:
#      plugin: flannel
#      flannel_network_provider:
#        iface: eth1
#
# # To specify flannel interface for canal plugin
#
#    network:
#      plugin: canal
#      canal_network_provider:
#        iface: eth1
#
network:
  options:
    flannel_backend_type: vxlan
  plugin: canal
#
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10
#
services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 12
      retention: 6
      safe_timestamp: false
    creation: 12h
    extra_args:
      election-timeout: 5000
      heartbeat-interval: 500
    gid: 1000
    retention: 72h
    snapshot: false
    uid: 1000
  kube_api:
    always_pull_images: true
    pod_security_policy: true
    service_node_port_range: 30000-32767
    event_rate_limit:
      enabled: true
    audit_log:
      enabled: true
    secrets_encryption_config:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      profiling: "false"
      service-account-lookup: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
    - "/opt/kubernetes:/opt/kubernetes"
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
ssh_agent_auth: false
```

{{% /accordion %}}

{{% accordion id="cluster-1.16" label="RKE yaml for k8s 1.16" %}}

```yaml
nodes:
- address: 18.191.190.205
  internal_address: 172.31.24.213
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.203
  internal_address: 172.31.24.203
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
- address: 18.191.190.10
  internal_address: 172.31.24.244
  user: ubuntu
  role: ["controlplane", "etcd", "worker"]
addon_job_timeout: 30
authentication:
  strategy: x509
ignore_docker_version: true
#
# # Currently only nginx ingress provider is supported.
# # To disable ingress controller, set `provider: none`
# # To enable ingress on specific nodes, use the node_selector, eg:
#    provider: nginx
#    node_selector:
#      app: ingress
#
ingress:
  provider: nginx
kubernetes_version: v1.16.3-rancher1-1
monitoring:
  provider: metrics-server
#
# If you are using calico on AWS
#
#    network:
#      plugin: calico
#      calico_network_provider:
#        cloud_provider: aws
#
# # To specify flannel interface
#
#    network:
#      plugin: flannel
#      flannel_network_provider:
#        iface: eth1
#
# # To specify flannel interface for canal plugin
#
#    network:
#      plugin: canal
#      canal_network_provider:
#        iface: eth1
#
network:
  options:
    flannel_backend_type: vxlan
  plugin: canal
#
#    services:
#      kube-api:
#        service_cluster_ip_range: 10.43.0.0/16
#      kube-controller:
#        cluster_cidr: 10.42.0.0/16
#        service_cluster_ip_range: 10.43.0.0/16
#      kubelet:
#        cluster_domain: cluster.local
#        cluster_dns_server: 10.43.0.10
#
services:
  etcd:
    backup_config:
      enabled: true
      interval_hours: 12
      retention: 6
      safe_timestamp: false
    creation: 12h
    extra_args:
      election-timeout: 5000
      heartbeat-interval: 500
    gid: 1000
    retention: 72h
    snapshot: false
    uid: 1000
  kube_api:
    always_pull_images: true
    pod_security_policy: true
    service_node_port_range: 30000-32767
    event_rate_limit:
      enabled: true
    audit_log:
      enabled: true
    secrets_encryption_config:
      enabled: true
    extra_args:
      anonymous-auth: "false"
      enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
      profiling: "false"
      service-account-lookup: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds:
    - "/opt/kubernetes:/opt/kubernetes"
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
  kube-controller:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
      terminated-pod-gc-threshold: "1000"
      feature-gates: "RotateKubeletServerCertificate=true"
  scheduler:
    extra_args:
      profiling: "false"
      address: "127.0.0.1"
ssh_agent_auth: false
```

{{% /accordion %}}

Appendix C - Complete RKE Template Example

Before applying, replace rancher_kubernetes_engine_config.services.etcd.gid and rancher_kubernetes_engine_config.services.etcd.uid with the etcd group and user IDs that were created on the etcd nodes.

{{% accordion id="k8s-1.14" label="RKE template for k8s 1.14" %}}

```yaml
#
# Cluster Config
#
answers: {}
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: false
name: test-35378
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  authorization: {}
  bastion_host:
    ssh_agent_auth: false
  cloud_provider: {}
  ignore_docker_version: true
  #
  # # Currently only nginx ingress provider is supported.
  # # To disable ingress controller, set `provider: none`
  # # To enable ingress on specific nodes, use the node_selector, eg:
  #    provider: nginx
  #    node_selector:
  #      app: ingress
  #
  ingress:
    provider: nginx
  kubernetes_version: v1.14.9-rancher1-1
  monitoring:
    provider: metrics-server
  #
  # If you are using calico on AWS
  #
  #    network:
  #      plugin: calico
  #      calico_network_provider:
  #        cloud_provider: aws
  #
  # # To specify flannel interface
  #
  #    network:
  #      plugin: flannel
  #      flannel_network_provider:
  #        iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  #    network:
  #      plugin: canal
  #      canal_network_provider:
  #        iface: eth1
  #
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
  restore:
    restore: false
  #
  #    services:
  #      kube-api:
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kube-controller:
  #        cluster_cidr: 10.42.0.0/16
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kubelet:
  #        cluster_domain: cluster.local
  #        cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: "5000"
        heartbeat-interval: "500"
      gid: 1000
      retention: 72h
      snapshot: false
      uid: 1000
    kube-api:
      always_pull_images: true
      audit_log:
        enabled: true
      event_rate_limit:
        enabled: true
      extra_args:
        anonymous-auth: "false"
        enable-admission-plugins: >-
          ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,PodSecurityPolicy,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,Priority,EventRateLimit
        profiling: "false"
        service-account-lookup: "true"
        tls-cipher-suites: >-
          TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
      extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
      pod_security_policy: true
      secrets_encryption_config:
        enabled: true
      service_node_port_range: 30000-32767
    kube-controller:
      extra_args:
        address: 127.0.0.1
        feature-gates: RotateKubeletServerCertificate=true
        profiling: "false"
        terminated-pod-gc-threshold: "1000"
    kubelet:
      extra_args:
        protect-kernel-defaults: "true"
      fail_swap_on: false
      generate_serving_certificate: true
    kubeproxy: {}
    scheduler:
      extra_args:
        address: 127.0.0.1
        profiling: "false"
  ssh_agent_auth: false
windows_prefered_cluster: false
```

{{% /accordion %}}

{{% accordion id="k8s-1.15" label="RKE template for k8s 1.15" %}}

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
  #
  # # Currently only nginx ingress provider is supported.
  # # To disable ingress controller, set `provider: none`
  # # To enable ingress on specific nodes, use the node_selector, eg:
  #    provider: nginx
  #    node_selector:
  #      app: ingress
  #
  ingress:
    provider: nginx
  kubernetes_version: v1.15.6-rancher1-2
  monitoring:
    provider: metrics-server
  #
  # If you are using calico on AWS
  #
  #    network:
  #      plugin: calico
  #      calico_network_provider:
  #        cloud_provider: aws
  #
  # # To specify flannel interface
  #
  #    network:
  #      plugin: flannel
  #      flannel_network_provider:
  #        iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  #    network:
  #      plugin: canal
  #      canal_network_provider:
  #        iface: eth1
  #
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
  #
  #    services:
  #      kube-api:
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kube-controller:
  #        cluster_cidr: 10.42.0.0/16
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kubelet:
  #        cluster_domain: cluster.local
  #        cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 1000
      retention: 72h
      snapshot: false
      uid: 1000
    kube_api:
      always_pull_images: true
      pod_security_policy: true
      service_node_port_range: 30000-32767
      event_rate_limit:
        enabled: true
      audit_log:
        enabled: true
      secrets_encryption_config:
        enabled: true
      extra_args:
        anonymous-auth: "false"
        enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
        profiling: "false"
        service-account-lookup: "true"
        tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
      extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
    kubelet:
      generate_serving_certificate: true
      extra_args:
        feature-gates: "RotateKubeletServerCertificate=true"
        protect-kernel-defaults: "true"
        tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    kube-controller:
      extra_args:
        profiling: "false"
        address: "127.0.0.1"
        terminated-pod-gc-threshold: "1000"
        feature-gates: "RotateKubeletServerCertificate=true"
    scheduler:
      extra_args:
        profiling: "false"
        address: "127.0.0.1"
  ssh_agent_auth: false
windows_prefered_cluster: false
```

{{% /accordion %}}

{{% accordion id="k8s-1.16" label="RKE template for k8s 1.16" %}}

```yaml
#
# Cluster Config
#
default_pod_security_policy_template_id: restricted
docker_root_dir: /var/lib/docker
enable_cluster_alerting: false
enable_cluster_monitoring: false
enable_network_policy: false
local_cluster_auth_endpoint:
  enabled: true
#
# Rancher Config
#
rancher_kubernetes_engine_config:
  addon_job_timeout: 30
  authentication:
    strategy: x509
  ignore_docker_version: true
  #
  # # Currently only nginx ingress provider is supported.
  # # To disable ingress controller, set `provider: none`
  # # To enable ingress on specific nodes, use the node_selector, eg:
  #    provider: nginx
  #    node_selector:
  #      app: ingress
  #
  ingress:
    provider: nginx
  kubernetes_version: v1.16.3-rancher1-1
  monitoring:
    provider: metrics-server
  #
  # If you are using calico on AWS
  #
  #    network:
  #      plugin: calico
  #      calico_network_provider:
  #        cloud_provider: aws
  #
  # # To specify flannel interface
  #
  #    network:
  #      plugin: flannel
  #      flannel_network_provider:
  #        iface: eth1
  #
  # # To specify flannel interface for canal plugin
  #
  #    network:
  #      plugin: canal
  #      canal_network_provider:
  #        iface: eth1
  #
  network:
    options:
      flannel_backend_type: vxlan
    plugin: canal
  #
  #    services:
  #      kube-api:
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kube-controller:
  #        cluster_cidr: 10.42.0.0/16
  #        service_cluster_ip_range: 10.43.0.0/16
  #      kubelet:
  #        cluster_domain: cluster.local
  #        cluster_dns_server: 10.43.0.10
  #
  services:
    etcd:
      backup_config:
        enabled: true
        interval_hours: 12
        retention: 6
        safe_timestamp: false
      creation: 12h
      extra_args:
        election-timeout: 5000
        heartbeat-interval: 500
      gid: 1000
      retention: 72h
      snapshot: false
      uid: 1000
    kube_api:
      always_pull_images: true
      pod_security_policy: true
      service_node_port_range: 30000-32767
      event_rate_limit:
        enabled: true
      audit_log:
        enabled: true
      secrets_encryption_config:
        enabled: true
      extra_args:
        anonymous-auth: "false"
        enable-admission-plugins: "ServiceAccount,NamespaceLifecycle,LimitRanger,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,AlwaysPullImages,DenyEscalatingExec,NodeRestriction,EventRateLimit,PodSecurityPolicy"
        profiling: "false"
        service-account-lookup: "true"
        tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
      extra_binds:
      - "/opt/kubernetes:/opt/kubernetes"
    kubelet:
      generate_serving_certificate: true
      extra_args:
        feature-gates: "RotateKubeletServerCertificate=true"
        protect-kernel-defaults: "true"
        tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    kube-controller:
      extra_args:
        profiling: "false"
        address: "127.0.0.1"
        terminated-pod-gc-threshold: "1000"
        feature-gates: "RotateKubeletServerCertificate=true"
    scheduler:
      extra_args:
        profiling: "false"
        address: "127.0.0.1"
  ssh_agent_auth: false
windows_prefered_cluster: false
```

{{% /accordion %}}