Using Kubernetes constructs in policy

This section covers Kubernetes-specific aspects of network policy.

Namespaces

Namespaces are used to create virtual clusters within a Kubernetes cluster. Namespaced Kubernetes objects, including NetworkPolicy and CiliumNetworkPolicy, belong to a particular namespace. Depending on how a policy is defined and created, Kubernetes namespaces are automatically taken into account:

  • Network policies created and imported as CiliumNetworkPolicy or NetworkPolicy resources apply within the namespace they are created in, i.e. the policy only applies to pods within that namespace. It is, however, possible to grant access to and from pods in other namespaces as described below.
  • Network policies imported directly via the API Reference apply to all namespaces unless a namespace selector is specified as described below.

Note

While specification of the namespace via the label k8s:io.kubernetes.pod.namespace in the fromEndpoints and toEndpoints fields is deliberately supported, specification of the namespace in the endpointSelector is prohibited, as it would violate the namespace isolation principle of Kubernetes. The endpointSelector always applies to pods of the namespace associated with the CiliumNetworkPolicy resource itself.
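
As a minimal illustration of this rule (the namespace ns-b and the pod label name: client below are placeholders, not part of the examples that follow), the namespace label belongs in the peer selectors, never in the endpointSelector:

# Allowed: select peer pods in another namespace inside fromEndpoints/toEndpoints.
ingress:
- fromEndpoints:
  - matchLabels:
      k8s:io.kubernetes.pod.namespace: ns-b
      name: client

# Prohibited: placing k8s:io.kubernetes.pod.namespace in the endpointSelector;
# the endpointSelector always selects pods in the policy's own namespace.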

Example: Enforce namespace boundaries

This example demonstrates how to enforce Kubernetes namespace-based boundaries for the namespaces ns1 and ns2 by enabling default-deny on all pods of either namespace and then allowing communication from all pods within the same namespace.

Note

The example locks down ingress for the pods in ns1 and ns2. This means that the pods can still send egress traffic anywhere, unless the destination is in ns1 or ns2, in which case both source and destination have to be in the same namespace. To enforce namespace boundaries at egress as well, specify the same rules under egress in addition to ingress (a sketch of this egress variant follows the example below).

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}
---
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns2"
  namespace: ns2
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}
JSON

[
  {
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "ns1"
            }
          }
        ]
      }
    ],
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "ns1"
      }
    }
  },
  {
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "ns2"
      }
    },
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "ns2"
            }
          }
        ]
      }
    ]
  }
]
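
To enforce the same boundary at egress, as mentioned in the note above, the corresponding rules can additionally be specified under egress. A minimal sketch for ns1 (the policy name isolate-ns1-egress is illustrative; the policy for ns2 is analogous):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1-egress"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}
  # Allow egress only to pods in the same namespace (ns1).
  egress:
  - toEndpoints:
    - matchLabels:
        {}

Keep in mind that once an egress rule selects the pods, egress default-deny applies as well, so any traffic leaving the namespace (for example, DNS lookups to kube-dns in kube-system) must then be allowed explicitly.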

Example: Expose pods across namespaces

The following example exposes all pods with the label name=leia in the namespace ns1 to all pods with the label name=luke in the namespace ns2.

Refer to the example YAML files for a fully functional example including pods deployed to different namespaces.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2
        name: luke
JSON

[{
  "labels": [{"key": "name", "value": "k8s-expose-across-namespace"}],
  "endpointSelector": {
    "matchLabels": {"name": "leia", "k8s:io.kubernetes.pod.namespace": "ns1"}
  },
  "ingress": [{
    "fromEndpoints": [{
      "matchLabels": {"name": "luke", "k8s:io.kubernetes.pod.namespace": "ns2"}
    }]
  }]
}]

Example: Allow egress to kube-dns in kube-system namespace

The following example allows all pods in the public namespace (the namespace in which the policy is created) to communicate with kube-dns on port 53/UDP in the kube-system namespace.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-to-kubedns"
  namespace: public
spec:
  endpointSelector:
    {}
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: '53'
        protocol: UDP
JSON

[
  {
    "endpointSelector": {
      "matchLabels": {
        "k8s:io.kubernetes.pod.namespace": "public"
      }
    },
    "egress": [
      {
        "toEndpoints": [
          {
            "matchLabels": {
              "k8s:io.kubernetes.pod.namespace": "kube-system",
              "k8s-app": "kube-dns"
            }
          }
        ],
        "toPorts": [
          {
            "ports": [
              {
                "port": "53",
                "protocol": "UDP"
              }
            ]
          }
        ]
      }
    ]
  }
]

ServiceAccounts

Kubernetes Service Accounts are used to associate an identity with a pod or process managed by Kubernetes and to grant identities access to Kubernetes resources and secrets. Cilium supports the specification of network security policies based on the service account identity of a pod.

The service account of a pod is either defined via the service account admission controller or can be specified directly in the Pod, Deployment, or ReplicationController resource, like this:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: leia
  ...
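
For pods managed by a controller, the same field is set in the pod template. A minimal sketch for a Deployment (the Deployment name, labels, and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # The service account applies to all pods created from this template.
      serviceAccountName: leia
      containers:
      - name: app
        image: my-image:latest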

Example

The following example allows any pod running under the service account “luke” to issue an HTTP GET /public request on TCP port 80 to all pods running under the service account “leia”.

Refer to the example YAML files for a fully functional example including deployment and service account resources.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-svc-account"
spec:
  endpointSelector:
    matchLabels:
      io.cilium.k8s.policy.serviceaccount: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        io.cilium.k8s.policy.serviceaccount: luke
    toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/public$"
JSON

[{
  "labels": [{"key": "name", "value": "k8s-svc-account"}],
  "endpointSelector": {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "leia"}},
  "ingress": [{
    "fromEndpoints": [
      {"matchLabels": {"io.cilium.k8s.policy.serviceaccount": "luke"}}
    ],
    "toPorts": [{
      "ports": [
        {"port": "80", "protocol": "TCP"}
      ],
      "rules": {
        "http": [
          {
            "method": "GET",
            "path": "/public$"
          }
        ]
      }
    }]
  }]
}]

Multi-Cluster

When operating multiple clusters connected with Cluster Mesh, the cluster name is exposed via the label io.cilium.k8s.policy.cluster and can be used to restrict policies to a particular cluster.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-cross-cluster"
spec:
  description: "Allow x-wing in cluster1 to contact rebel-base in cluster2"
  endpointSelector:
    matchLabels:
      name: x-wing
      io.cilium.k8s.policy.cluster: cluster1
  egress:
  - toEndpoints:
    - matchLabels:
        name: rebel-base
        io.cilium.k8s.policy.cluster: cluster2

Clusterwide Policies

CiliumNetworkPolicy is a namespaced resource, so a policy defined with it is always restricted to a particular namespace. For situations where a policy should have cluster-wide effect, Cilium provides the CiliumClusterwideNetworkPolicy Kubernetes custom resource. Its specification is the same as that of CiliumNetworkPolicy, except that it is not namespaced.

The following policy allows ingress traffic from pods matching the label name=luke in any namespace to pods matching the label name=leia in any namespace.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "clusterwide-policy-example"
spec:
  description: "Policy for selective ingress allow to a pod from only a pod with given label"
  endpointSelector:
    matchLabels:
      name: leia
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: luke

Example: Allow all ingress to kube-dns

The following example allows all Cilium-managed endpoints in the cluster to communicate with kube-dns on port 53/UDP in the kube-system namespace.

k8s YAML

apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "wildcard-from-endpoints"
spec:
  description: "Policy for ingress allow to kube-dns from all Cilium managed endpoints in the cluster"
  endpointSelector:
    matchLabels:
      k8s:io.kubernetes.pod.namespace: kube-system
      k8s-app: kube-dns
  ingress:
  - fromEndpoints:
    - {}
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP