Protect Kubernetes nodes

Big picture

Secure Kubernetes nodes with host endpoints managed by Calico.

Value

Calico can automatically create host endpoints for your Kubernetes nodes. This means Calico can manage the lifecycle of host endpoints as your cluster evolves, ensuring nodes are always protected by policy.

Features

This how-to guide uses the following Calico features:

  • HostEndpoint
  • KubeControllersConfiguration
  • GlobalNetworkPolicy

Concepts

Host endpoints

Each host has one or more network interfaces that it uses to communicate externally. You can represent these interfaces in Calico using host endpoints and then use network policy to secure them.

Calico host endpoints can have labels, which work the same way as labels on workload endpoints. Network policy rules can apply to both workload and host endpoints using label selectors.

A host endpoint with interfaceName: "*" secures all of the host’s interfaces (in Linux, all the interfaces in the host network namespace). Automatic host endpoints are created this way.
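
For reference, a manually defined wildcard host endpoint might look like the following sketch; the node name, label, and IP address are illustrative assumptions, not values from your cluster:

  apiVersion: projectcalico.org/v3
  kind: HostEndpoint
  metadata:
    name: node1-all-interfaces   # hypothetical name
    labels:
      environment: dev           # illustrative label
  spec:
    # "*" matches all interfaces in the host network namespace.
    interfaceName: "*"
    node: node1                  # hypothetical node name
    expectedIPs:
    - 172.16.101.147             # illustrative IP address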

Automatic host endpoints

Calico creates a wildcard host endpoint for each node, containing the same labels and IP addresses as its corresponding node. Calico keeps each managed host endpoint in sync with its node through periodic updates, so policy targeting these automatic host endpoints continues to select the intended nodes even if a node’s IPs or labels change over time.

Automatic host endpoints are differentiated from other host endpoints by the label projectcalico.org/created-by: calico-kube-controllers. Enable or disable automatic host endpoints by configuring the default KubeControllersConfiguration resource.
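
For reference, the relevant portion of that resource looks like the following minimal sketch; other controller settings are omitted:

  apiVersion: projectcalico.org/v3
  kind: KubeControllersConfiguration
  metadata:
    name: default
  spec:
    controllers:
      node:
        hostEndpoint:
          autoCreate: Enabled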

Before you begin…

Have a running Calico cluster with calicoctl installed.

How to

Enable automatic host endpoints

To enable automatic host endpoints, edit the default KubeControllersConfiguration instance, and set spec.controllers.node.hostEndpoint.autoCreate to Enabled:

  calicoctl patch kubecontrollersconfiguration default --patch='{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'

If successful, host endpoints are created for each of your cluster’s nodes:

  calicoctl get heps -o wide

The output may look similar to this:

  NAME                                                    NODE                                           INTERFACE   IPS                              PROFILES
  ip-172-16-101-147.us-west-2.compute.internal-auto-hep   ip-172-16-101-147.us-west-2.compute.internal   *           172.16.101.147,192.168.228.128   projectcalico-default-allow
  ip-172-16-101-54.us-west-2.compute.internal-auto-hep    ip-172-16-101-54.us-west-2.compute.internal    *           172.16.101.54,192.168.107.128    projectcalico-default-allow
  ip-172-16-101-79.us-west-2.compute.internal-auto-hep    ip-172-16-101-79.us-west-2.compute.internal    *           172.16.101.79,192.168.91.64      projectcalico-default-allow
  ip-172-16-101-9.us-west-2.compute.internal-auto-hep     ip-172-16-101-9.us-west-2.compute.internal     *           172.16.101.9,192.168.71.192      projectcalico-default-allow
  ip-172-16-102-63.us-west-2.compute.internal-auto-hep    ip-172-16-102-63.us-west-2.compute.internal    *           172.16.102.63,192.168.108.192    projectcalico-default-allow

Apply network policy to automatic host endpoints

To apply policy that targets all Kubernetes nodes, first add a label to the nodes. The label will be synced to their automatic host endpoints.

For example, to add the label kubernetes-host to all nodes and their host endpoints:

  kubectl label nodes --all kubernetes-host=
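
With automatic host endpoints enabled, you can confirm the label was synced by inspecting one of the generated host endpoints; the endpoint name below is taken from the example output above:

  calicoctl get hostendpoint ip-172-16-101-147.us-west-2.compute.internal-auto-hep -o yaml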

Here is an example policy snippet that uses the label:

  apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: all-nodes-policy
  spec:
    selector: has(kubernetes-host)
    # <rest of the policy>

To select a specific set of host endpoints (and their corresponding Kubernetes nodes), use a policy selector that selects a label unique to that set of host endpoints. For example, if we want to add the label environment=dev to nodes named node1 and node2:

  kubectl label node node1 environment=dev
  kubectl label node node2 environment=dev

With the labels in place and automatic host endpoints enabled, host endpoints for node1 and node2 will be updated with the environment=dev label. We can write policy to select that set of nodes with a combination of selectors:

  apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: some-nodes-policy
  spec:
    selector: has(kubernetes-host) && environment == 'dev'
    # <rest of the policy>

Tutorial

This tutorial will lock down Kubernetes node ingress to allow only SSH and the ports required for Kubernetes to function. We will apply two policies: one for the control plane nodes and one for the worker nodes.

Note: This tutorial was tested on a cluster created with kubeadm v1.18.2 on AWS, using a “stacked etcd” topology. Stacked etcd topology means the etcd pods are running on the masters. kubeadm uses stacked etcd by default. If your Kubernetes cluster is on a different platform, is running a variant of Kubernetes, or is running a topology with an external etcd cluster, please review the required ports for control plane and worker nodes in your cluster and adjust the policies in this tutorial as needed.

First, let’s restrict ingress traffic to the control plane nodes. The ingress policy below contains three rules. The first rule allows access to the API server port from anywhere. The second rule allows all traffic to localhost, which lets Kubernetes reach the control plane processes running there: the etcd server client API, the scheduler, and the controller-manager. This rule also allows localhost access to the kubelet API and the calico/node health checks. The final rule allows the etcd pods to peer with each other and allows the masters to access each other’s kubelet APIs.

If you have not modified the failsafe ports, you should still have SSH access to the nodes after applying this policy. Now apply the ingress policy for the Kubernetes masters:

  calicoctl apply -f - << EOF
  apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: ingress-k8s-masters
  spec:
    selector: has(node-role.kubernetes.io/master)
    # This rule allows ingress to the Kubernetes API server.
    ingress:
    - action: Allow
      protocol: TCP
      destination:
        ports:
        # kube API server
        - 6443
    # This rule allows all traffic to localhost.
    - action: Allow
      destination:
        nets:
        - 127.0.0.0/8
    # This rule is required in multi-master clusters where etcd pods are colocated with the masters.
    # Allow the etcd pods on the masters to communicate with each other. 2380 is the etcd peer port.
    # This rule also allows the masters to access the kubelet API on other masters (including itself).
    - action: Allow
      protocol: TCP
      source:
        selector: has(node-role.kubernetes.io/master)
      destination:
        ports:
        - 2380
        - 10250
  EOF

Note that the above policy selects the standard node-role.kubernetes.io/master label that kubeadm sets on control plane nodes.
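
To confirm the policy is in place, you can list it with calicoctl; the exact output columns may vary by calicoctl version:

  calicoctl get globalnetworkpolicy ingress-k8s-masters -o wide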

Next, we need to apply policy to restrict ingress to the Kubernetes workers. Before adding the policy, we will add a label to all of our worker nodes, which is then synced to their automatic host endpoints. For this tutorial we will use kubernetes-worker. An example command to add the label to all worker nodes:

  kubectl get node -l '!node-role.kubernetes.io/master' -o custom-columns=NAME:.metadata.name | tail -n +2 | xargs -I{} kubectl label node {} kubernetes-worker=
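
To verify the label landed on the intended nodes, kubectl can print it as a column:

  kubectl get nodes -L kubernetes-worker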

The workers’ ingress policy consists of two rules. The first rule allows all traffic to localhost; as with the masters, the worker nodes need access to their localhost kubelet API and calico/node health checks. The second rule allows the masters to access the workers’ kubelet API. Now apply the policy:

  calicoctl apply -f - << EOF
  apiVersion: projectcalico.org/v3
  kind: GlobalNetworkPolicy
  metadata:
    name: ingress-k8s-workers
  spec:
    selector: has(kubernetes-worker)
    # Allow all traffic to localhost.
    ingress:
    - action: Allow
      destination:
        nets:
        - 127.0.0.0/8
    # Allow only the masters access to the nodes' kubelet API.
    - action: Allow
      protocol: TCP
      source:
        selector: has(node-role.kubernetes.io/master)
      destination:
        ports:
        - 10250
  EOF
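
As a final sanity check (not part of the original tutorial steps), you can list the applied policies and the automatic host endpoints they select:

  calicoctl get globalnetworkpolicy
  calicoctl get heps -o wide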

Additional resources