Egress Gateway (beta)

Note

This is a beta feature. Please provide feedback and file a GitHub issue if you experience any problems.

The egress gateway allows users to redirect egress pod traffic through specific gateway nodes. Packets are masqueraded to the gateway node's IP address.

This document explains how to enable the egress gateway and configure egress NAT policies to route and SNAT the egress traffic for a specific workload.

Note

This guide assumes that Cilium has been correctly installed in your Kubernetes cluster. Please see Quick Installation for more information. If unsure, run cilium status and validate that Cilium is up and running.
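For example (a minimal check; if the cilium CLI is not installed locally, you can run it inside one of the agent pods):

  $ # Run the status check inside a Cilium agent pod
  $ kubectl -n kube-system exec ds/cilium -- cilium status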

Enable Egress Gateway

The feature is disabled by default. Enable it either via Helm or via the cilium-config ConfigMap:

Helm

If you installed Cilium via helm install, you can enable the egress gateway feature with the following command:

  helm upgrade cilium cilium/cilium --version 1.10.2 \
    --namespace kube-system \
    --reuse-values \
    --set egressGateway.enabled=true \
    --set bpf.masquerade=true \
    --set kubeProxyReplacement=strict
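
To double-check that the new values were applied, you can inspect the user-supplied values of the release (a minimal sketch, assuming the release is named cilium in the kube-system namespace as above):

  $ # Print the user-supplied values of the release and filter for the new settings
  $ helm get values cilium --namespace kube-system | grep -E -A1 'egressGateway|masquerade|kubeProxyReplacement'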

ConfigMap

Alternatively, egress gateway support can be enabled by setting the following options in the cilium-config ConfigMap:

  enable-egress-gateway: true
  enable-bpf-masquerade: true
  kube-proxy-replacement: strict
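
Note that running agents do not pick up ConfigMap changes automatically; the Cilium DaemonSet has to be restarted for the new options to take effect (a minimal sketch; the rollout briefly restarts the agent on every node):

  $ # Restart the Cilium agents so they load the updated ConfigMap
  $ kubectl -n kube-system rollout restart daemonset/cilium
  $ kubectl -n kube-system rollout status daemonset/cilium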

Create an External Service (Optional)

This feature changes the default behavior of how packets leave the cluster: from the external service's point of view, traffic will arrive with a different source IP address. If you don't have an external service to experiment with, nginx is a simple way to demonstrate the functionality, since its access log shows which IP address each request came from.

Create an nginx service on a Linux node that is external to the existing Kubernetes cluster, and use it as the destination of the egress traffic.

  $ # Install and start nginx
  $ sudo apt install nginx
  $ sudo systemctl start nginx
  $ # Make sure the service is started and listens on port :80
  $ sudo systemctl status nginx
     nginx.service - A high performance web server and a reverse proxy server
       Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
       Active: active (running) since Sun 2021-04-04 21:58:57 UTC; 1min 3s ago
  [...]
  $ curl http://192.168.33.13:80   # Assume 192.168.33.13 is the external IP of the node
  [...]
  <title>Welcome to nginx!</title>
  [...]

Create Client Pods

Deploy a client pod that will generate traffic to be redirected based on the configuration specified in the CiliumEgressNATPolicy.

  $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes-dns/dns-sw-app.yaml
  $ kubectl get po
  NAME           READY   STATUS    RESTARTS   AGE
  pod/mediabot   1/1     Running   0          14s
  $ kubectl exec mediabot -- curl http://192.168.33.13:80
  <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
  [...]

Verify in the access log of the nginx node (or whichever external service you are using) that the request comes from one of the nodes in the Kubernetes cluster. For example, on the nginx node, the access log will contain something like the following:

  $ tail /var/log/nginx/access.log
  [...]
  192.168.33.11 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"

In the previous example, the client pod is running on the node 192.168.33.11, so this result is expected: without egress NAT, the default Kubernetes behavior is to masquerade egress traffic to the IP of the node the pod runs on. You can confirm where the pod is scheduled as shown below.
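
To confirm which node, and therefore which node IP, the client pod was scheduled on (a minimal sketch; the NODE column of the wide output shows the node name):

  $ # Show the node that the mediabot pod is running on
  $ kubectl get pod mediabot -o wide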

Configure Egress IPs

Deploy the following deployment to assign additional egress IPs to the gateway node. The node that runs the pod will have additional IP addresses configured on the external interface (enp0s8 in this example) and will become the egress gateway. In the following example, 192.168.33.100 and 192.168.33.101 become egress IPs that can be consumed by an Egress NAT Policy. Make sure these IP addresses are routable on the interface they are assigned to; otherwise the return traffic won't be able to route back. You can verify the assignment with the commands shown after the manifest.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: "egress-ip-assign"
    labels:
      name: "egress-ip-assign"
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: "egress-ip-assign"
    template:
      metadata:
        labels:
          name: "egress-ip-assign"
      spec:
        affinity:
          # the following pod affinity ensures that the "egress-ip-assign" pod
          # runs on the same node as the mediabot pod
          podAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                - key: class
                  operator: In
                  values:
                  - mediabot
                - key: org
                  operator: In
                  values:
                  - empire
              topologyKey: "kubernetes.io/hostname"
        hostNetwork: true
        containers:
        - name: egress-ip
          image: docker.io/library/busybox:1.31.1
          command: ["/bin/sh", "-c"]
          securityContext:
            privileged: true
          env:
          - name: EGRESS_IPS
            value: "192.168.33.100/24 192.168.33.101/24"
          args:
          - "for i in $EGRESS_IPS; do ip address add $i dev enp0s8; done; sleep 10000000"
          lifecycle:
            preStop:
              exec:
                command:
                - "/bin/sh"
                - "-c"
                - "for i in $EGRESS_IPS; do ip address del $i dev enp0s8; done"

Create Egress NAT Policy

Apply the following Egress NAT Policy, which says: when a pod running in namespace default with labels org: empire and class: mediabot talks to the IP CIDR 192.168.33.13/32, use egress IP 192.168.33.100. In other words, it tells Cilium to forward packets from the client pod to the gateway node holding the egress IP 192.168.33.100, and to masquerade them with that IP address.

  apiVersion: cilium.io/v2alpha1
  kind: CiliumEgressNATPolicy
  metadata:
    name: egress-sample
  spec:
    egress:
    - podSelector:
        matchLabels:
          org: empire
          class: mediabot
          # The following label selects default namespace
          io.kubernetes.pod.namespace: default
      # Or use namespace label selector to select multiple namespaces
      # namespaceSelector:
      #   matchLabels:
      #     ns: default
    destinationCIDRs:
    - 192.168.33.13/32
    egressSourceIP: "192.168.33.100"
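
To apply the policy, save the manifest to a file and create it with kubectl (a minimal sketch; the file name egress-nat-policy.yaml is just an example):

  $ # Create the policy and confirm the object exists
  $ kubectl apply -f egress-nat-policy.yaml
  $ kubectl get -f egress-nat-policy.yaml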

Let’s switch back to the client pod and verify it works.

  $ kubectl exec mediabot -- curl http://192.168.33.13:80
  <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
  [...]

Verify in the access log of the nginx node (or whichever service you chose) that the request now comes from the egress IP instead of one of the Kubernetes cluster nodes. In nginx's case, you will see a log entry like the following, showing that the request comes from 192.168.33.100 now, instead of 192.168.33.11.

  $ tail /var/log/nginx/access.log
  [...]
  192.168.33.100 - - [04/Apr/2021:22:06:57 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.52.1"
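
If the source IP does not change as expected, capturing traffic on the external node can help narrow down where the translation is going wrong (a minimal sketch, assuming the nginx node's external interface is also named enp0s8):

  $ # On the nginx node: print incoming HTTP packets and their source addresses
  $ sudo tcpdump -ni enp0s8 tcp port 80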