Ingress Access Control

This task shows you how to enforce IP-based access control on an Istio ingress gateway using an authorization policy.

Istio includes beta support for the Kubernetes Gateway API and intends to make it the default API for traffic management in the future. The following instructions let you use either the Gateway API or the Istio configuration APIs when configuring traffic management in the mesh. Follow the Gateway API or Istio APIs instructions, according to your preference.

Note that the Kubernetes Gateway API CRDs do not come installed by default on most Kubernetes clusters, so make sure they are installed before using the Gateway API:

  $ kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
    { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=444631bfe06f3bcca5d0eadf1857eac1d369421d" | kubectl apply -f -; }

Before you begin

Before you begin this task, do the following:

  • Read the Istio authorization concepts.

  • Install Istio using the Istio installation guide.

  • Deploy a workload, httpbin, in namespace foo with sidecar injection enabled:

    $ kubectl create ns foo
    $ kubectl label namespace foo istio-injection=enabled
    $ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n foo
  • Expose httpbin through an ingress gateway:

If you are using the Istio APIs, configure the gateway:

  $ kubectl apply -f @samples/httpbin/httpbin-gateway.yaml@ -n foo

Turn on RBAC debugging in Envoy for the ingress gateway:

  $ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n istio-system --level rbac:debug; done

Follow the instructions in Determining the ingress IP and ports to define the INGRESS_PORT and INGRESS_HOST environment variables.

If you are using the Gateway API, create the gateway:

  $ kubectl apply -f @samples/httpbin/gateway-api/httpbin-gateway.yaml@ -n foo
  $ kubectl wait --for=condition=programmed gtw -n foo httpbin-gateway

Turn on RBAC debugging in Envoy for the ingress gateway:

  $ kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n foo --level rbac:debug; done

Set the INGRESS_PORT and INGRESS_HOST environment variables:

  $ export INGRESS_HOST=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.status.addresses[0].value}')
  $ export INGRESS_PORT=$(kubectl get gtw httpbin-gateway -n foo -o jsonpath='{.spec.listeners[?(@.name=="http")].port}')
  • Verify that the httpbin workload and ingress gateway are working as expected using this command:

    $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
    200

    If you don’t see the expected output, retry after a few seconds. Caching and propagation overhead can cause a delay.

Getting traffic into Kubernetes and Istio

All methods of getting traffic into Kubernetes involve opening a port on all worker nodes. The main features that accomplish this are the NodePort service and the LoadBalancer service. Even the Kubernetes Ingress resource must be backed by an Ingress controller that will create either a NodePort or a LoadBalancer service.

  • A NodePort service opens a port in the range 30000-32767 on each worker node and uses a label selector to identify which Pods to send the traffic to. You have to manually create some kind of load balancer in front of your worker nodes, or use round-robin DNS.

  • A LoadBalancer service is just like a NodePort, except it also creates an environment-specific external load balancer to handle distributing traffic to the worker nodes. For example, in AWS EKS, the LoadBalancer service creates a Classic ELB with your worker nodes as targets. If your Kubernetes environment has no LoadBalancer implementation, it simply behaves like a NodePort. An Istio ingress gateway creates a LoadBalancer service.

What if the Pod that is handling traffic from the NodePort or LoadBalancer isn’t running on the worker node that received the traffic? Kubernetes has its own internal proxy called kube-proxy that receives the packets and forwards them to the correct node.
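As an illustration of the NodePort mechanics described above, here is a minimal, hypothetical Service manifest. The name, ports, and selector are placeholders for illustration, not part of this task:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: httpbin                # label selector identifying the target Pods
  ports:
  - port: 8000                  # cluster-internal service port
    targetPort: 8080            # container port on the selected Pods
    nodePort: 30080             # must fall in the 30000-32767 range
```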

Source IP address of the original client

If a packet goes through an external proxy load balancer and/or kube-proxy, then the original source IP address of the client is lost. The following subsections describe some strategies for preserving the original client IP, for logging or security purposes, for different load balancer types:

  1. TCP/UDP Proxy Load Balancer
  2. Network Load Balancer
  3. HTTP/HTTPS Load Balancer

For reference, here are the types of load balancers created by Istio with a LoadBalancer service on popular managed Kubernetes environments:

Cloud Provider   Load Balancer Name              Load Balancer Type
AWS EKS          Classic Elastic Load Balancer   TCP Proxy
GCP GKE          TCP/UDP Network Load Balancer   Network
Azure AKS        Azure Load Balancer             Network
IBM IKS/ROKS     Network Load Balancer           Network
DO DOKS          Load Balancer                   Network

You can instruct AWS EKS to create a Network Load Balancer with an annotation on the gateway service:

Istio APIs:

  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      accessLogEncoding: JSON
      accessLogFile: /dev/stdout
    components:
      ingressGateways:
      - enabled: true
        k8s:
          hpaSpec:
            maxReplicas: 10
            minReplicas: 5
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

Gateway API:

  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: Gateway
  metadata:
    name: httpbin-gateway
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  spec:
    gatewayClassName: istio
    ...

TCP/UDP Proxy Load Balancer

If you are using a TCP/UDP proxy external load balancer (such as an AWS Classic ELB), it can use the PROXY protocol to embed the original client IP address in the packet data. Both the external load balancer and the Istio ingress gateway must support the PROXY protocol for this to work.
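For intuition, PROXY protocol v1 prepends a single human-readable header line to the TCP stream before any application data, carrying the addresses the load balancer saw. The sketch below parses a fabricated header line to recover the embedded client IP; the sample values are illustrative only:

```shell
# A PROXY protocol v1 header: PROXY <proto> <src-ip> <dst-ip> <src-port> <dst-port>
header='PROXY TCP4 192.0.2.10 198.51.100.1 56324 443'

# The third whitespace-separated field is the original client IP
# that would otherwise be lost behind the proxy.
client_ip=$(echo "$header" | awk '{print $3}')
echo "$client_ip"
```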

Here is a sample configuration that shows how to make an ingress gateway on AWS EKS support the PROXY Protocol:

Istio APIs:

  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      accessLogEncoding: JSON
      accessLogFile: /dev/stdout
      defaultConfig:
        gatewayTopology:
          proxyProtocol: {}
    components:
      ingressGateways:
      - enabled: true
        name: istio-ingressgateway
        k8s:
          hpaSpec:
            maxReplicas: 10
            minReplicas: 5
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  ...

Gateway API:

  apiVersion: gateway.networking.k8s.io/v1beta1
  kind: Gateway
  metadata:
    name: httpbin-gateway
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      proxy.istio.io/config: '{"gatewayTopology" : { "proxyProtocol": {} }}'
  spec:
    gatewayClassName: istio
  ...
  ---
  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: httpbin-gateway
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: httpbin-gateway-istio
    minReplicas: 5
    maxReplicas: 10

Network Load Balancer

If you are using a TCP/UDP network load balancer that preserves the client IP address (AWS Network Load Balancer, GCP External Network Load Balancer, Azure Load Balancer) or you are using Round-Robin DNS, then you can use the externalTrafficPolicy: Local setting to also preserve the client IP inside Kubernetes by bypassing kube-proxy and preventing it from sending traffic to other nodes.

For production deployments, it is strongly recommended to run ingress gateway pods on multiple nodes if you enable externalTrafficPolicy: Local. Otherwise, only nodes with an active ingress gateway pod can accept incoming NLB traffic and distribute it to the rest of the cluster. This creates potential ingress bottlenecks, reduces internal load balancing capability, and can even cause a complete loss of ingress traffic to the cluster if the subset of nodes hosting gateway pods goes down. See Source IP for Services with Type=NodePort for more information.
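One way to satisfy the multiple-nodes recommendation is to spread gateway replicas across nodes with a topology spread constraint. The following is a hypothetical sketch, not configuration from this task: the Deployment name and labels must be adapted to your gateway (for example, istio-ingressgateway for the Istio APIs, or httpbin-gateway-istio for the Gateway API):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway          # placeholder; use your gateway Deployment name
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname   # spread replicas across worker nodes
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            istio: ingressgateway             # placeholder; match your gateway pod labels
```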

Update the ingress gateway to set externalTrafficPolicy: Local, preserving the original client source IP on the ingress gateway, using the following command:

Istio APIs:

  $ kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'

Gateway API:

  $ kubectl patch svc httpbin-gateway-istio -n foo -p '{"spec":{"externalTrafficPolicy":"Local"}}'

HTTP/HTTPS Load Balancer

If you are using an HTTP/HTTPS external load balancer (such as an AWS ALB or the GCP external HTTP(S) load balancer), it can record the original client IP address in the X-Forwarded-For header. Istio can extract the client IP address from this header with some configuration; see Configuring Gateway Network Topology. Here is a quick example for a single load balancer in front of Kubernetes:

  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      accessLogEncoding: JSON
      accessLogFile: /dev/stdout
      defaultConfig:
        gatewayTopology:
          numTrustedProxies: 1
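To see what numTrustedProxies does, the sketch below applies the selection rule to a fabricated X-Forwarded-For value: with N trusted proxies in front of the gateway, the client address is taken N entries from the right of the header, since the rightmost entries were appended by trusted hops. The header value and indexing here are illustrative; consult the Envoy X-Forwarded-For documentation for the authoritative behavior:

```shell
# Fabricated X-Forwarded-For value: client, then two intermediate proxies
xff='203.0.113.7, 70.41.3.18, 150.172.238.178'
n=1                       # numTrustedProxies: 1 (one trusted load balancer)

# Pick the address n entries from the right: with one trusted proxy,
# the rightmost entry is the client IP that the proxy appended.
client_ip=$(echo "$xff" | awk -F', *' -v n="$n" '{print $(NF - n + 1)}')
echo "$client_ip"
```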

IP-based allow list and deny list

When to use ipBlocks vs. remoteIpBlocks: If you are using the X-Forwarded-For HTTP header or the PROXY Protocol to determine the original client IP address, then you should use remoteIpBlocks in your AuthorizationPolicy. If you are using externalTrafficPolicy: Local, then you should use ipBlocks in your AuthorizationPolicy.

Load Balancer Type   Source of Client IP     ipBlocks vs. remoteIpBlocks
TCP Proxy            PROXY Protocol          remoteIpBlocks
Network              packet source address   ipBlocks
HTTP/HTTPS           X-Forwarded-For         remoteIpBlocks
  • The following command creates the authorization policy, ingress-policy, for the Istio ingress gateway. The policy sets the action field to ALLOW so that the IP addresses specified in ipBlocks can access the ingress gateway; IP addresses not in the list are denied. ipBlocks supports both single IP addresses and CIDR notation.

Istio APIs, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: ALLOW
    rules:
    - from:
      - source:
          ipBlocks: ["1.2.3.4", "5.6.7.0/24"]
  EOF

Istio APIs, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: ALLOW
    rules:
    - from:
      - source:
          remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24"]
  EOF

Gateway API, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: ALLOW
    rules:
    - from:
      - source:
          ipBlocks: ["1.2.3.4", "5.6.7.0/24"]
  EOF

Gateway API, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: ALLOW
    rules:
    - from:
      - source:
          remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24"]
  EOF
  • Verify that a request to the ingress gateway is denied:

    $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
    403
  • Assign your original client IP address to an environment variable. If you don't know it, you can find it in the Envoy logs using the following command:

Istio APIs, ipBlocks:

  $ CLIENT_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
  192.168.10.15

Istio APIs, remoteIpBlocks:

  $ CLIENT_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $4}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
  192.168.10.15

Gateway API, ipBlocks:

  $ CLIENT_IP=$(kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo | grep remoteIP; done | tail -1 | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
  192.168.10.15

Gateway API, remoteIpBlocks:

  $ CLIENT_IP=$(kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo | grep remoteIP; done | tail -1 | awk -F, '{print $4}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
  192.168.10.15
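The extraction pipeline above can be sanity-checked offline. The log line below is fabricated to mimic the shape of Envoy's RBAC debug output; the same awk/sed steps should then recover the IP from the third comma-separated field, as in the ipBlocks variant:

```shell
# Fabricated log line modeled on Envoy RBAC debug output (field layout is an assumption)
line='checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.244.0.1:55446, remoteIP: 192.168.10.15:0'

# Same steps as the CLIENT_IP pipeline: take the 3rd comma field,
# then the 2nd colon field, then drop the leading space.
client_ip=$(echo "$line" | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //')
echo "$client_ip"
```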
  • Update the ingress-policy to include your client IP address:

Istio APIs, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: ALLOW
    rules:
    - from:
      - source:
          ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
  EOF

Istio APIs, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: ALLOW
    rules:
    - from:
      - source:
          remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
  EOF

Gateway API, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: ALLOW
    rules:
    - from:
      - source:
          ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
  EOF

Gateway API, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: ALLOW
    rules:
    - from:
      - source:
          remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
  EOF
  • Verify that a request to the ingress gateway is allowed:

    $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
    200
  • Update the ingress-policy authorization policy to set the action field to DENY, so that the IP addresses specified in ipBlocks are denied access to the ingress gateway:

Istio APIs, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: DENY
    rules:
    - from:
      - source:
          ipBlocks: ["$CLIENT_IP"]
  EOF

Istio APIs, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: istio-system
  spec:
    selector:
      matchLabels:
        app: istio-ingressgateway
    action: DENY
    rules:
    - from:
      - source:
          remoteIpBlocks: ["$CLIENT_IP"]
  EOF

Gateway API, ipBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: DENY
    rules:
    - from:
      - source:
          ipBlocks: ["$CLIENT_IP"]
  EOF

Gateway API, remoteIpBlocks:

  $ kubectl apply -f - <<EOF
  apiVersion: security.istio.io/v1
  kind: AuthorizationPolicy
  metadata:
    name: ingress-policy
    namespace: foo
  spec:
    targetRef:
      kind: Gateway
      group: gateway.networking.k8s.io
      name: httpbin-gateway
    action: DENY
    rules:
    - from:
      - source:
          remoteIpBlocks: ["$CLIENT_IP"]
  EOF
  • Verify that a request to the ingress gateway is denied:

    $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
    403
  • You could use an online proxy service to access the ingress gateway from a different client IP to verify that the request is allowed.

  • If you are not getting the responses you expect, view the ingress gateway logs, which should show RBAC debugging information:

Istio APIs:

  $ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system; done

Gateway API:

  $ kubectl get pods -n foo -o name -l gateway.networking.k8s.io/gateway-name=httpbin-gateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n foo; done

Clean up

  • Remove the authorization policy:

Istio APIs:

  $ kubectl delete authorizationpolicy ingress-policy -n istio-system

Gateway API:

  $ kubectl delete authorizationpolicy ingress-policy -n foo
  • Remove the namespace foo:

    $ kubectl delete namespace foo