Creating policies from verdicts

Policy Audit Mode configures Cilium to allow all traffic while logging all connections that would otherwise be dropped by network policy. Policy Audit Mode may be configured for the entire daemon using --policy-audit-mode=true or for individual Cilium endpoints. When Policy Audit Mode is enabled, no network policy is enforced, so this setting is not recommended for production deployments. Policy Audit Mode supports auditing network policies implemented at network layers 3 and 4. This guide walks through the process of creating policies using Policy Audit Mode.
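As a quick sanity check, you can inspect an agent's runtime configuration to see whether audit mode is currently active. The sketch below is illustrative rather than part of the guide: it reuses a Cilium pod name that appears later in this walkthrough and assumes the agent's cilium config output lists the PolicyAuditMode option.

  # Hedged example: the pod name is borrowed from later in this guide.
  $ kubectl -n kube-system exec cilium-5ngzd -c cilium-agent -- cilium config | grep -i PolicyAuditMode
  # Expect this to report Disabled until audit mode is enabled below.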

If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.

The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

Setup Cilium

If you have not set up Cilium yet, follow the guide Quick Installation for instructions on how to quickly bootstrap a Kubernetes cluster and install Cilium. If in doubt, pick the minikube route; you will be good to go in less than 5 minutes.

Deploy the Demo Application

Now that we have Cilium deployed and kube-dns operating correctly we can deploy our demo application.

In our Star Wars-inspired example, there are three microservice applications: deathstar, tiefighter, and xwing. The deathstar runs an HTTP web service on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire’s spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship and xwing represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to deathstar landing services.

Application Topology for Cilium and Kubernetes


The file http-sw-app.yaml contains a Kubernetes Deployment for each of the three services. Each deployment is identified using the Kubernetes labels (org=empire, class=deathstar), (org=empire, class=tiefighter), and (org=alliance, class=xwing). It also includes a deathstar-service, which load-balances traffic to all pods with label (org=empire, class=deathstar).
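For orientation, the sketch below shows roughly how the deathstar Deployment applies those labels and exposes port 80. It is an illustration only, not the contents of http-sw-app.yaml; in particular, the container image is a placeholder.

  # Illustrative only; see http-sw-app.yaml for the real manifest.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: deathstar
  spec:
    replicas: 2
    selector:
      matchLabels:
        org: empire
        class: deathstar
    template:
      metadata:
        labels:
          org: empire
          class: deathstar
      spec:
        containers:
        - name: deathstar
          image: example.com/deathstar:latest   # placeholder image
          ports:
          - containerPort: 80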

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/http-sw-app.yaml
  service/deathstar created
  deployment.extensions/deathstar created
  pod/tiefighter created
  pod/xwing created

Kubernetes will deploy the pods and service in the background. Running kubectl get pods,svc will inform you about the progress of the operation. Each pod will go through several states until it reaches Running, at which point the pod is ready.

  $ kubectl get pods,svc
  NAME                             READY   STATUS    RESTARTS   AGE
  pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
  pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
  pod/tiefighter                   1/1     Running   0          107s
  pod/xwing                        1/1     Running   0          107s

  NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
  service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
  service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   3m53s

Each pod will be represented in Cilium as an Endpoint. We can invoke the cilium tool inside the Cilium pod to list them:

  $ kubectl -n kube-system get pods -l k8s-app=cilium
  NAME           READY   STATUS    RESTARTS   AGE
  cilium-5ngzd   1/1     Running   0          3m19s
  $ kubectl -n kube-system exec cilium-5ngzd -- cilium endpoint list
  ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4         STATUS
             ENFORCEMENT        ENFORCEMENT
  232        Disabled           Disabled          16530      k8s:class=deathstar                                       10.0.0.147   ready
                                                             k8s:io.cilium.k8s.policy.cluster=default
                                                             k8s:io.cilium.k8s.policy.serviceaccount=default
                                                             k8s:io.kubernetes.pod.namespace=default
                                                             k8s:org=empire
  726        Disabled           Disabled          1          reserved:host                                                          ready
  883        Disabled           Disabled          4          reserved:health                                           10.0.0.244   ready
  1634       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                  10.0.0.118   ready
                                                             k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                             k8s:io.kubernetes.pod.namespace=kube-system
                                                             k8s:k8s-app=kube-dns
  1673       Disabled           Disabled          31028      k8s:class=tiefighter                                      10.0.0.112   ready
                                                             k8s:io.cilium.k8s.policy.cluster=default
                                                             k8s:io.cilium.k8s.policy.serviceaccount=default
                                                             k8s:io.kubernetes.pod.namespace=default
                                                             k8s:org=empire
  2811       Disabled           Disabled          51373      k8s:io.cilium.k8s.policy.cluster=default                  10.0.0.47    ready
                                                             k8s:io.cilium.k8s.policy.serviceaccount=coredns
                                                             k8s:io.kubernetes.pod.namespace=kube-system
                                                             k8s:k8s-app=kube-dns
  2843       Disabled           Disabled          16530      k8s:class=deathstar                                       10.0.0.89    ready
                                                             k8s:io.cilium.k8s.policy.cluster=default
                                                             k8s:io.cilium.k8s.policy.serviceaccount=default
                                                             k8s:io.kubernetes.pod.namespace=default
                                                             k8s:org=empire
  3184       Disabled           Disabled          22654      k8s:class=xwing                                           10.0.0.30    ready
                                                             k8s:io.cilium.k8s.policy.cluster=default
                                                             k8s:io.cilium.k8s.policy.serviceaccount=default
                                                             k8s:io.kubernetes.pod.namespace=default
                                                             k8s:org=alliance

Both ingress and egress policy enforcement are still disabled on all of these pods because no network policy has been imported yet that selects any of the pods.
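You can confirm this directly: listing network policies should come back empty at this point (assuming everything runs in the default namespace):

  $ kubectl get ciliumnetworkpolicies,networkpolicies
  No resources found in default namespace.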

Enable Policy Audit Mode (Entire Daemon)

To observe policy audit messages for all endpoints managed by this daemonset, modify the Cilium configmap and restart all daemons:

Configure via kubectl:

  $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"true"}}'
  configmap/cilium-config patched
  $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
  daemonset.apps/cilium restarted
  $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
  Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
  daemon set "cilium" successfully rolled out

If you installed Cilium via helm install, then you can use helm upgrade to enable Policy Audit Mode:

  $ helm upgrade cilium cilium/cilium --version 1.12.0 \
      --namespace $CILIUM_NAMESPACE \
      --reuse-values \
      --set policyAuditMode=true

Enable Policy Audit Mode (Specific Endpoint)

Cilium can enable Policy Audit Mode for a specific endpoint. This may be helpful when enabling Policy Audit Mode for the entire daemon is too broad. Enabling it per endpoint ensures that other endpoints managed by the same daemon are not impacted.

This approach is meant to be temporary. Restarting the Cilium pod will reset Policy Audit Mode to match the daemon's configuration.

Policy Audit Mode is enabled for a given endpoint by modifying the endpoint configuration via the cilium tool on the endpoint’s Kubernetes node. The steps include:

  1. Determine the endpoint id on which Policy Audit Mode will be enabled.

  2. Identify the Cilium pod running on the same Kubernetes node as that endpoint.

  3. Using the Cilium pod above, modify the endpoint config by setting PolicyAuditMode=Enabled.

The following shell commands perform these steps:

  $ export PODNAME=deathstar
  $ export NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
  $ export ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
  $ export CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
  $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- cilium endpoint config "$ENDPOINT" PolicyAuditMode=Enabled
  Endpoint 232 configuration updated successfully

Observe policy verdicts

In this example, we are tasked with applying a security policy for the deathstar. First, from the Cilium pod we need to monitor the notifications for policy verdicts using cilium monitor -t policy-verdict. We'll watch for inbound traffic towards the deathstar to identify it and determine whether to extend the network policy to allow that traffic.
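One way to do this without opening an interactive shell in the agent is via kubectl exec; a sketch, assuming $CILIUM_NAMESPACE is set and $CILIUM_POD points at the Cilium pod on the deathstar's node (as in the per-endpoint section above):

  # Stream policy verdict notifications from the agent managing the deathstar endpoint.
  $ kubectl -n "$CILIUM_NAMESPACE" exec -it "$CILIUM_POD" -c cilium-agent -- \
      cilium monitor -t policy-verdict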

Apply a default-deny policy:

  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "empire-default-deny"
  spec:
    description: "Default-deny ingress policy for the empire"
    endpointSelector:
      matchLabels:
        org: empire
    ingress:
    - {}

CiliumNetworkPolicies match on pod labels using an endpointSelector to identify the sources and destinations to which the policy applies. The above policy denies traffic sent to any pods with label (org=empire). Because Policy Audit Mode was enabled above (either for the entire daemon or just for the deathstar endpoint), the traffic will not actually be denied; instead, it will trigger policy verdict notifications.

To apply this policy, run:

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/sw_deny_policy.yaml
  ciliumnetworkpolicy.cilium.io/empire-default-deny created

With the above policy, we enable a default-deny posture on ingress to pods with the label org=empire and enable policy verdict notifications for those pods. The same principle applies on egress as well.
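For comparison, an egress default-deny only differs in direction; the sketch below is illustrative and not part of the demo (the policy name is made up):

  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "empire-default-deny-egress"   # hypothetical name, not used elsewhere in this guide
  spec:
    description: "Default-deny egress policy for the empire"
    endpointSelector:
      matchLabels:
        org: empire
    egress:
    - {}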

From another terminal with kubectl access, send some traffic from the tiefighter to the deathstar:

  $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed

Back in the Cilium pod, the policy verdict logs are printed in the monitor output:

  # cilium monitor -t policy-verdict
  ...
  Policy verdict log: flow 0x63113709 local EP ID 232, remote ID 31028, proto 6, ingress, action audit, match none, 10.0.0.112:54134 -> 10.29.50.40:80 tcp SYN

In the above example, we can see that endpoint 232 has received ingress traffic which doesn't match the policy (action audit, match none). The source of this traffic has the identity 31028. Let's gather a bit more information about what these numbers mean:

  # cilium endpoint list
  ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                        IPv6   IPv4          STATUS
             ENFORCEMENT        ENFORCEMENT
  232        Disabled (Audit)   Disabled          16530      k8s:class=deathstar                                       10.29.50.40   ready
                                                             k8s:io.cilium.k8s.policy.cluster=default
                                                             k8s:io.cilium.k8s.policy.serviceaccount=default
                                                             k8s:io.kubernetes.pod.namespace=default
                                                             k8s:org=empire
  ...
  # cilium identity get 31028
  ID      LABELS
  31028   k8s:class=tiefighter
          k8s:io.cilium.k8s.policy.cluster=default
          k8s:io.cilium.k8s.policy.serviceaccount=default
          k8s:io.kubernetes.pod.namespace=default
          k8s:org=empire

Create the Network Policy

Given the above information, we now know the labels of the target pod, the labels of the peer that's attempting to connect, the direction of the traffic, and the port. In this case we can see clearly that it's an empire craft, so once we've determined that we expect this traffic to arrive at the deathstar, we can form a policy to match it:

  apiVersion: "cilium.io/v2"
  kind: CiliumNetworkPolicy
  metadata:
    name: "rule1"
  spec:
    description: "L3-L4 policy to restrict deathstar access to empire ships only"
    endpointSelector:
      matchLabels:
        org: empire
        class: deathstar
    ingress:
    - fromEndpoints:
      - matchLabels:
          org: empire
      toPorts:
      - ports:
        - port: "80"
          protocol: TCP

To apply this L3/L4 policy, run:

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/sw_l3_l4_policy.yaml
  ciliumnetworkpolicy.cilium.io/rule1 created

Now if we run the landing request again, we can observe in the monitor output that the traffic which was previously audited as a would-be policy drop is now reported as allowed:

  $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed

Executed from the Cilium pod:

  # cilium monitor -t policy-verdict
  Policy verdict log: flow 0xabf3bda6 local EP ID 232, remote ID 31028, proto 6, ingress, action allow, match L3-L4, 10.0.0.112:59824 -> 10.0.0.147:80 tcp SYN

Now the policy verdict states that the traffic would be allowed: action allow. Success!
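Before disabling audit mode, it can be worth repeating the request from the xwing. While audit mode is still active the request still succeeds, but its verdict should continue to show action audit with match none, since no rule allows alliance ships:

  # Still lands only because audit mode is on; the monitor entry for this flow should not show "match L3-L4".
  $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed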

Disable Policy Audit Mode (Entire Daemon)

These steps should be repeated for each connection in the cluster to ensure that the network policy allows all of the expected traffic. The final step after deploying the policy is to disable Policy Audit Mode again:
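One quick way to spot any remaining unmatched traffic before flipping the switch is to filter the monitor output for audit verdicts; a sketch, run from a shell inside the Cilium pod. Any line it prints is traffic that the current policy would drop once audit mode is turned off:

  # cilium monitor -t policy-verdict | grep "action audit"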

Configure via kubectl:

  $ kubectl patch -n $CILIUM_NAMESPACE configmap cilium-config --type merge --patch '{"data":{"policy-audit-mode":"false"}}'
  configmap/cilium-config patched
  $ kubectl -n $CILIUM_NAMESPACE rollout restart ds/cilium
  daemonset.apps/cilium restarted
  $ kubectl -n $CILIUM_NAMESPACE rollout status ds/cilium
  Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
  daemon set "cilium" successfully rolled out

If you installed Cilium via helm install, you can instead use helm upgrade to disable Policy Audit Mode:

  $ helm upgrade cilium cilium/cilium --version 1.12.0 \
      --namespace $CILIUM_NAMESPACE \
      --reuse-values \
      --set policyAuditMode=false

Disable Policy Audit Mode (Specific Endpoint)

These steps are nearly identical to enabling Policy Audit Mode.

  $ export PODNAME=deathstar
  $ export NODENAME=$(kubectl get pod -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].spec.nodeName}")
  $ export ENDPOINT=$(kubectl get cep -o jsonpath="{.items[?(@.metadata.name=='$PODNAME')].status.id}")
  $ export CILIUM_POD=$(kubectl -n "$CILIUM_NAMESPACE" get pod --all-namespaces --field-selector spec.nodeName="$NODENAME" -lk8s-app=cilium -o jsonpath='{.items[*].metadata.name}')
  $ kubectl -n "$CILIUM_NAMESPACE" exec "$CILIUM_POD" -c cilium-agent -- cilium endpoint config "$ENDPOINT" PolicyAuditMode=Disabled
  Endpoint 232 configuration updated successfully

Alternatively, restarting the Cilium pod will reset the endpoint's Policy Audit Mode to the DaemonSet configuration.
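For example, deleting the agent pod lets the DaemonSet controller recreate it with whatever cilium-config currently specifies; a sketch, assuming $CILIUM_POD still points at the agent on the deathstar's node:

  # The DaemonSet recreates the pod, and the endpoint picks up the daemon-level setting.
  $ kubectl -n "$CILIUM_NAMESPACE" delete pod "$CILIUM_POD"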

Verify Policy Audit Mode Is Disabled

Now if we run the landing requests again, only the tiefighter pod (labeled org=empire) will succeed. The xwing pod will be blocked!

  $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed

This works as expected. Now the same request run from an xwing pod will fail:

  $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request, or wait for it to time out.
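If you'd rather not wait or interrupt manually, you can bound the request with curl's --max-time option; a sketch (the 10-second limit is arbitrary):

  # Exits with a non-zero status once the timeout is hit, since the SYN packets are now dropped.
  $ kubectl exec xwing -- curl -s --max-time 10 -XPOST deathstar.default.svc.cluster.local/v1/request-landing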

We hope you enjoyed the tutorial. Feel free to play more with the setup, follow the Identity-Aware and HTTP-Aware Policy Enforcement guide, and reach out to us on the Cilium Slack channel with any questions!

Clean-up

  $ kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/v1.12/examples/minikube/http-sw-app.yaml
  $ kubectl delete cnp rule1 empire-default-deny