Host Firewall (beta)

This document introduces Cilium's host firewall, which enforces security policies for Kubernetes nodes.

Note

This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

Enable the Host Firewall in Cilium

Note

First, make sure you have Helm 3 installed.

If you have (or plan to have) Helm 2 charts (and Tiller) in the same cluster, there should be no issue, as both versions can coexist to support gradual migration. The Cilium chart targets Helm 3 (v3.0.3 and above).

Setup Helm repository:

  helm repo add cilium https://helm.cilium.io/

Deploy Cilium release via Helm:

  helm install cilium cilium/cilium --version 1.8.10 \
    --namespace kube-system \
    --set global.hostFirewall=true \
    --set global.devices='{ethX,ethY}'

The global.devices flag refers to the network devices Cilium is configured on, such as eth0. If this option is omitted, Cilium auto-detects the interfaces the host firewall applies to.

At this point, the Cilium-managed nodes are ready to enforce network policies.
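To double-check that the agents came up before applying any policies, you can list the Cilium pods with a standard kubectl query (pod names will differ in your cluster):

  $ kubectl -n kube-system get pods -l k8s-app=cilium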

Note

The host firewall is not compatible with per-endpoint routing. This option is enabled by default on managed services (AKS, EKS, GKE), so to use the host firewall in those environments, per-endpoint routing must be disabled. For example, on GKE, replace --set gke.enabled=true with --set ipam.mode=kubernetes --set endpointRoutes.enabled=false --set tunnel=disabled.

See also GitHub issue #13121.
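Putting the note together with the installation command above, a GKE-flavored install might look like the following sketch. This is illustrative only: flag prefixes (such as global.) vary between chart versions, so verify each option against the chart version you deploy.

  helm install cilium cilium/cilium --version 1.8.10 \
    --namespace kube-system \
    --set global.hostFirewall=true \
    --set ipam.mode=kubernetes \
    --set endpointRoutes.enabled=false \
    --set tunnel=disabled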

Attach a Label to the Node

In this guide, we will apply host policies only to nodes with the label node-access=ssh. We thus first need to attach that label to a node in the cluster.

  $ export NODE_NAME=k8s1
  $ kubectl label node $NODE_NAME node-access=ssh
  node/k8s1 labeled
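To confirm the label is in place, you can list the nodes matching it with a standard kubectl label selector:

  $ kubectl get nodes -l node-access=ssh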

Enable Policy Audit Mode for the Host Endpoint

Host Policies enforce access control over connectivity to and from nodes. Particular care must be taken to ensure that when host policies are imported, Cilium does not block access to the nodes or break the cluster’s normal behavior (for example by blocking communication with kube-apiserver).

To avoid such issues, we can switch the host firewall to audit mode, to validate the impact of host policies before enforcing them. When Policy Audit Mode is enabled, no network policy is enforced, so this setting is not recommended for production deployments.

  $ CILIUM_NAMESPACE=kube-system
  $ CILIUM_POD_NAME=$(kubectl -n $CILIUM_NAMESPACE get pods -l "k8s-app=cilium" -o jsonpath="{.items[?(@.spec.nodeName=='$NODE_NAME')].metadata.name}")
  $ HOST_EP_ID=$(kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint list -o jsonpath='{[?(@.status.identity.id==1)].id}')
  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint config $HOST_EP_ID PolicyAuditMode=Enabled
  Endpoint 3353 configuration updated successfully
  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint config $HOST_EP_ID | grep PolicyAuditMode
  PolicyAuditMode          Enabled

Apply a Host Network Policy

Host Policies match on node labels using a Node Selector to identify the nodes to which the policy applies. The following policy applies to all nodes with the node-access=ssh label. It allows communications from outside the cluster only on port TCP/22. All communications from the cluster to the hosts are allowed.

Host policies don't apply to communications between pods, or between pods and the outside of the cluster, unless those pods use host networking.

  apiVersion: "cilium.io/v2"
  kind: CiliumClusterwideNetworkPolicy
  description: ""
  metadata:
    name: "demo-host-policy"
  spec:
    nodeSelector:
      matchLabels:
        node-access: ssh
    ingress:
    - fromEntities:
      - cluster
    - toPorts:
      - ports:
        - port: "22"
          protocol: TCP
To apply this policy, run:

  $ kubectl create -f |SCM_WEB|/examples/policies/host/demo-host-policy.yaml
  ciliumclusterwidenetworkpolicy.cilium.io/demo-host-policy created
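The demo policy deliberately leaves port 22 open to any source. As a sketch of a tighter variant, the following policy restricts SSH to a management subnet with a fromCIDR rule; the policy name and the 192.0.2.0/24 range are placeholders to adapt to your environment.

  apiVersion: "cilium.io/v2"
  kind: CiliumClusterwideNetworkPolicy
  metadata:
    # Placeholder name for this illustrative variant.
    name: "restricted-host-policy"
  spec:
    nodeSelector:
      matchLabels:
        node-access: ssh
    ingress:
    - fromEntities:
      - cluster
    - fromCIDR:
      # Placeholder management subnet; replace with your own range.
      - 192.0.2.0/24
      toPorts:
      - ports:
        - port: "22"
          protocol: TCP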

The host is represented as a special endpoint, with the label reserved:host, in the output of the cilium endpoint list command. You can therefore inspect the status of the policy using that command.

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint list
  ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6                 IPv4           STATUS
             ENFORCEMENT        ENFORCEMENT
  266        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default          f00d::a0b:0:0:ef4e   10.16.172.63   ready
                                                             k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                             k8s:io.kubernetes.pod.namespace=kube-system
                                                             k8s:k8s-app=kube-dns
  1687       Disabled (Audit)   Disabled          1          k8s:node-access=ssh                                                                   ready
                                                             reserved:host
  3362       Disabled           Disabled          4          reserved:health                                    f00d::a0b:0:0:49cf   10.16.87.66    ready

Adjust the Host Policy to Your Environment

As long as the host endpoint is running in audit mode, communications disallowed by the policy won't be dropped. They will, however, be reported by cilium monitor as action audit. Audit mode thus allows you to adjust the host policy to your environment without causing unexpected connection breakages.

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium monitor -t policy-verdict --related-to $HOST_EP_ID
  Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 1, ingress, action allow, match L3-Only, 192.168.33.12 -> 192.168.33.11 EchoRequest
  Policy verdict log: flow 0x0 local EP ID 1687, remote ID 6, proto 6, ingress, action allow, match L3-Only, 192.168.33.12:37278 -> 192.168.33.11:2379 tcp SYN
  Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action audit, match none, 10.0.2.2:47500 -> 10.0.2.15:6443 tcp SYN

For details on how to derive the network policies from the output of cilium monitor, please refer to Observe policy verdicts and Create the Network Policy in the Creating policies from verdicts guide.

In particular, Entities Based rules are convenient, for example, to allow communications with entire classes of peers, such as all remote nodes (remote-node) or the entire cluster (cluster).
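For instance, the last audit line above shows traffic to the kube-apiserver port (6443) from remote identity 2, which is the reserved world entity. A rule to allow it could look like the following ingress fragment; whether world access to port 6443 is appropriate depends on your environment, so treat this as a sketch rather than a recommendation.

  ingress:
  - fromEntities:
    - world
    toPorts:
    - ports:
      - port: "6443"
        protocol: TCP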

Warning

Make sure that none of the communications required to access the cluster or for the cluster to work properly are denied. They should appear as action allow.
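One way to check this is to filter the verdict log for remaining audited flows while you exercise the cluster; if nothing shows up as action audit, the policy covers all observed traffic:

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium monitor -t policy-verdict --related-to $HOST_EP_ID | grep "action audit"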

Disable Policy Audit Mode

Once you are confident all required communications to the host from outside the cluster are allowed, you can disable policy audit mode to enforce the host policy.

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint config $HOST_EP_ID PolicyAuditMode=Disabled
  Endpoint 3353 configuration updated successfully

Ingress host policies should now appear as enforced:

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium endpoint list
  ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6                 IPv4           STATUS
             ENFORCEMENT        ENFORCEMENT
  266        Disabled           Disabled          104        k8s:io.cilium.k8s.policy.cluster=default          f00d::a0b:0:0:ef4e   10.16.172.63   ready
                                                             k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                             k8s:io.kubernetes.pod.namespace=kube-system
                                                             k8s:k8s-app=kube-dns
  1687       Enabled            Disabled          1          k8s:node-access=ssh                                                                   ready
                                                             reserved:host
  3362       Disabled           Disabled          4          reserved:health                                    f00d::a0b:0:0:49cf   10.16.87.66    ready

Communications not explicitly allowed by the host policy will now be dropped:

  $ kubectl -n $CILIUM_NAMESPACE exec $CILIUM_POD_NAME -- cilium monitor -t policy-verdict --related-to $HOST_EP_ID
  Policy verdict log: flow 0x0 local EP ID 1687, remote ID 2, proto 6, ingress, action deny, match none, 10.0.2.2:49038 -> 10.0.2.15:21 tcp SYN
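You can also probe the node from a machine outside the cluster to see the difference, for example with netcat; <node-ip> is a placeholder, and port 21 matches the denied flow in the log above:

  $ nc -vz <node-ip> 22   # allowed by the policy, connection succeeds
  $ nc -vz <node-ip> 21   # not allowed, times out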

Clean Up

  $ kubectl delete ccnp demo-host-policy
  $ kubectl label node $NODE_NAME node-access-
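If you also want to turn the host firewall itself back off, you can revert the Helm option from the installation step. As a sketch (using --reuse-values so the upgrade keeps your other settings):

  helm upgrade cilium cilium/cilium --version 1.8.10 \
    --namespace kube-system \
    --reuse-values \
    --set global.hostFirewall=false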