Gateway

When services need to receive traffic from the outside, commonly called North/South traffic, the Kuma Gateway routes network traffic from outside a Kuma mesh to services inside the mesh. The gateway is also responsible for security at the entrance of the mesh.

Kuma Gateway deploys as a Kuma Dataplane, that is, an instance of the kuma-dp process. Like all Kuma Dataplanes, the Kuma Gateway Dataplane manages an Envoy proxy process that does the actual network traffic proxying.

You can distinguish two types of gateways:

  • delegated: Allows users to use any existing gateway like Kong.
  • builtin: Configures the data plane proxy to expose external listeners to drive traffic inside the mesh.

Gateways exist within a mesh. If you have multiple meshes, each mesh requires its own gateway. You can easily connect your meshes together using cross-mesh gateways.

The visualization below shows the difference between delegated and builtin gateways:

Builtin:

Gateway - Figure 1

Delegated, with Kong Gateway handling the inbound traffic:

Gateway - Figure 2

The blue lines represent traffic not managed by Kuma, which needs configuring in the Gateway.

Delegated

The Dataplane entity can operate in gateway mode. This way you can integrate Kuma with existing API Gateways like Kong.

The gateway mode lets you skip exposing inbound listeners, so the proxy does not intercept ingress traffic. When you use a data plane proxy with a service, both inbound traffic to the service and outbound traffic from the service flow through the proxy. In gateway mode, you want inbound traffic to go directly to the gateway; otherwise, clients would need the dynamically generated certificates used for communication between services within the mesh. The gateway itself should handle security at the entrance to the mesh.

Usage

Kuma supports most ingress controllers; however, the recommended gateway on Kubernetes is Kong. You can use the Kong ingress controller for Kubernetes to implement authentication, transformations, and other functionality across Kubernetes clusters with zero downtime. Most ingress controllers require the annotation ingress.kubernetes.io/service-upstream=true on every Kubernetes Service to work with Kuma. Kuma automatically injects this annotation for every Service in a namespace that is part of the mesh, that is, one with the kuma.io/sidecar-injection: enabled label.
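As an illustration, a Service carrying that annotation explicitly might look like the following sketch (the demo-app name, namespace, and port are assumptions, not part of any real deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-app        # hypothetical service name
  namespace: kuma-demo  # hypothetical namespace in the mesh
  annotations:
    # Send traffic to the cluster IP instead of directly to pod IPs,
    # so the Kuma sidecar can intercept and route it inside the mesh.
    ingress.kubernetes.io/service-upstream: "true"
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
```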

To use the delegated gateway feature, mark your API Gateway’s Pod with the kuma.io/gateway: enabled annotation. The control plane then automatically generates the corresponding Dataplane objects.

For example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
...
```

The API Gateway receives traffic for Services from:

  • one specific zone
  • multiple zones (multi-zone)

Multi-zone requires exposing a dedicated Kubernetes Service object of type ExternalName, whose externalName ends with the .mesh suffix; Kuma resolves such names through its internal service discovery.

Example setting up Kong Ingress Controller

Follow these instructions to set up an echo service reachable through Kong. They are mostly taken from the Kong docs.

  1. Install Kuma on your cluster and label the default namespace with kuma.io/sidecar-injection: enabled.

  2. Install Kong using Helm.

  3. Start an echo-service:

```shell
kubectl apply -f https://bit.ly/echo-service
```

  4. Add an ingress:

```shell
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /foo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo
            port:
              number: 80
" | kubectl apply -f -
```

You can access your ingress with curl -i $PROXY_IP/foo, where $PROXY_IP can be retrieved from the service that exposes Kong outside your cluster.
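For example, assuming Kong was installed with Helm into the kong namespace under the release name kong (both names are assumptions about your environment and may differ), the proxy address could be fetched along these lines:

```shell
# Read the LoadBalancer address from the Service that exposes Kong
PROXY_IP=$(kubectl get service --namespace kong kong-kong-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -i "$PROXY_IP/foo"
```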

You can check that the sidecar is running by checking the number of containers in each pod:

```shell
kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
echo-5fc5b5bc84-zr9kl                   2/2     Running   0          41m
kong-1645186528-kong-648b9596c7-f2xfv   3/3     Running   2          40m
```

Example Gateway in Multi-Zone

In the previous example, you set up an echo service that runs on port 80 and is deployed in the default namespace.

Now make sure that this service works correctly with multi-zone. To do so, create the Service manually:

```shell
echo "
apiVersion: v1
kind: Service
metadata:
  name: echo-multizone
  namespace: default
spec:
  type: ExternalName
  externalName: echo.default.svc.80.mesh
" | kubectl apply -f -
```

Finally, you need to create a corresponding Kubernetes Ingress that routes /bar to the multi-zone service:

```shell
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-multizone
  namespace: default
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo-multizone
            port:
              number: 80
" | kubectl apply -f -
```

Note that since you are addressing the service by its domain name echo.default.svc.80.mesh, you should always refer to port 80. This port is only a placeholder and is automatically replaced with the actual port of the service.

If you want to expose a Service in one zone only, as opposed to multi-zone, you can just use the service name in the Ingress definition without creating an ExternalName entry, as in the first example.

For an in-depth example on deploying Kuma with Kong for Kubernetes, please follow this demo application guide.

On Universal, you can define the Dataplane entity like this:

```yaml
type: Dataplane
mesh: default
name: kong-01
networking:
  address: 10.0.0.1
  gateway:
    type: DELEGATED
    tags:
      kuma.io/service: kong
  outbound:
  - port: 33033
    tags:
      kuma.io/service: backend
```
When configuring your API Gateway to pass traffic to backend, set the URL to http://localhost:33033.
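As a sketch of the gateway side, with Kong running in DB-less mode the corresponding upstream could be declared like this (the service and route names and the /backend path are hypothetical):

```yaml
# kong.yml — declarative configuration (hypothetical names)
_format_version: "3.0"
services:
- name: backend
  # The outbound listener defined on the Dataplane above
  url: http://localhost:33033
  routes:
  - name: backend-route
    paths:
    - /backend
```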

Builtin

The builtin gateway is integrated into the core Kuma control plane. You can configure gateway listeners and routes to services directly using Kuma policies.

The builtin gateway is configured on a Dataplane:

```yaml
type: Dataplane
mesh: default
name: gateway-instance-1
networking:
  address: 127.0.0.1
  gateway:
    type: BUILTIN
    tags:
      kuma.io/service: edge-gateway
```

A builtin gateway Dataplane has neither inbound nor outbound configuration.

To configure your gateway, Kuma provides these resources:

  • MeshGateway configures the listeners exposed by the gateway.
  • MeshGatewayRoute configures routes that direct traffic from listeners to other services.

Kuma gateways are configured with the Envoy best practices for edge proxies.

Usage

You can create and configure a gateway that listens for traffic from outside of your mesh and forwards it to the demo app frontend.

To ease starting gateways on Kubernetes, Kuma comes with a builtin type MeshGatewayInstance.

This resource launches kuma-dp in your cluster. If you are running a multi-zone Kuma, MeshGatewayInstance needs to be created in a specific zone, not the global cluster. See the dedicated section for using builtin gateways on multi-zone.

This type requests that the control plane create and manage a Kubernetes Deployment and Service suitable for providing service capacity for the MeshGateway with the matching kuma.io/service tag.

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
  name: edge-gateway
  namespace: default
spec:
  replicas: 1
  serviceType: LoadBalancer
  tags:
    kuma.io/service: edge-gateway
" | kubectl apply -f -
```

Once a MeshGateway exists that matches the kuma.io/service tag, the control plane creates a new Deployment in the default namespace. This Deployment has the requested number of builtin gateway Dataplane pod replicas running as the service named in the MeshGatewayInstance tags. The control plane also creates a new Service to send network traffic to the builtin Dataplane pods. The Service is of the type requested in the MeshGatewayInstance, and its ports are automatically adjusted to match the listeners on the corresponding MeshGateway.
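Assuming the generated Deployment and Service inherit the MeshGatewayInstance name (an assumption worth verifying in your cluster), you can inspect them with:

```shell
# List the Deployment and Service the control plane created
kubectl get deployment,service --namespace default edge-gateway
```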

Customization

Additional customization of the generated Service or Pods is possible via MeshGatewayInstance.spec. For example, you can add annotations and/or labels to the generated objects:

```yaml
spec:
  replicas: 1
  serviceType: LoadBalancer
  tags:
    kuma.io/service: edge-gateway
  resources:
    limits: ...
    requests: ...
  serviceTemplate:
    metadata:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      loadBalancerIP: ...
  podTemplate:
    metadata:
      labels:
        app-name: my-app
  ...
```

You can also modify several security-related parameters for the generated Pods, and specify a loadBalancerIP for the Service:

```yaml
spec:
  replicas: 1
  serviceType: LoadBalancer
  tags:
    kuma.io/service: edge-gateway
  resources:
    limits: ...
    requests: ...
  serviceTemplate:
    metadata:
      labels:
        svc-id: "19-001"
    spec:
      loadBalancerIP: ...
  podTemplate:
    metadata:
      annotations:
        app-monitor: "false"
    spec:
      serviceAccountName: my-sa
      securityContext:
        fsGroup: ...
      container:
        securityContext:
          readOnlyRootFilesystem: true
```
On Universal, the first thing you’ll need is to create a Dataplane object for your gateway:

```yaml
type: Dataplane
mesh: default
name: gateway-instance-1
networking:
  address: 127.0.0.1
  gateway:
    type: BUILTIN
    tags:
      kuma.io/service: edge-gateway
```

Note that this gateway has a kuma.io/service tag. Use it to bind policies that configure this gateway.

Since you’re on Universal, you now need to run kuma-dp:

```shell
kuma-dp run \
  --cp-address=https://localhost:5678/ \
  --dns-enabled=false \
  --dataplane-token-file=kuma-token-gateway \ # this needs to be generated like for a regular Dataplane
  --dataplane-file=my-gateway.yaml # the Dataplane resource described above
```

Now let’s create a MeshGateway to configure the listeners:

On Kubernetes:

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGateway
mesh: default
metadata:
  name: edge-gateway
spec:
  selectors:
  - match:
      kuma.io/service: edge-gateway
  conf:
    listeners:
    - port: 8080
      protocol: HTTP
      hostname: foo.example.com
      tags:
        port: http/8080
" | kubectl apply -f -
```

On Universal:

```yaml
type: MeshGateway
mesh: default
name: edge-gateway
selectors:
- match:
    kuma.io/service: edge-gateway
conf:
  listeners:
  - port: 8080
    protocol: HTTP
    hostname: foo.example.com
    tags:
      port: http/8080
```

The MeshGateway creates a listener on port 8080 and accepts any traffic with the Host header set to foo.example.com. Notice that listeners have tags, like Dataplanes; this is useful when binding routes to listeners.

These are Kuma policies so if you are running on multi-zone they need to be created on the Global CP. See the dedicated section for using builtin gateways on multi-zone.

Now, you can define a MeshGatewayRoute to forward your traffic based on the matched URL path.

On Kubernetes:

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayRoute
mesh: default
metadata:
  name: edge-gateway-route
spec:
  selectors:
  - match:
      kuma.io/service: edge-gateway
      port: http/8080
  conf:
    http:
      rules:
      - matches:
        - path:
            match: PREFIX
            value: /
        backends:
        - destination:
            kuma.io/service: demo-app_kuma-demo_svc_5000
" | kubectl apply -f -
```

On Universal:

```yaml
type: MeshGatewayRoute
mesh: default
name: edge-gateway-route
selectors:
- match:
    kuma.io/service: edge-gateway
    port: http/8080
conf:
  http:
    rules:
    - matches:
      - path:
          match: PREFIX
          value: /
      backends:
      - destination:
          kuma.io/service: demo-app_kuma-demo_svc_5000
```
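With the listener and route in place, the gateway can be exercised by sending a request with a Host header matching the listener hostname; the address below assumes the Universal Dataplane defined earlier (127.0.0.1, listener port 8080):

```shell
# Request must carry the hostname configured on the listener
curl -i -H 'Host: foo.example.com' http://127.0.0.1:8080/
```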

TCP

The builtin gateway also supports TCP MeshGatewayRoutes:

```yaml
type: MeshGateway
mesh: default
name: edge-gateway
selectors:
- match:
    kuma.io/service: edge-gateway
conf:
  listeners:
  - port: 8080
    protocol: TCP
    tags:
      port: tcp/8080
---
type: MeshGatewayRoute
mesh: default
name: edge-gateway-route
selectors:
- match:
    kuma.io/service: edge-gateway
    port: tcp/8080
conf:
  tcp:
    rules:
    - backends:
      - destination:
          kuma.io/service: redis_kuma-demo_svc_6379
```
The TCP configuration only supports the backends key (no matches or filters); there is no TCP-generic way to match or filter traffic, so a TCP route can only load balance.

Multi-zone

The Kuma Gateway resource types, MeshGateway and MeshGatewayRoute, are synced across zones by the Kuma control plane. If you have a multi-zone deployment, follow existing Kuma practice and create any Kuma Gateway resources in the global control plane. Once these resources exist, you can provision serving capacity in the zones where it is needed by deploying builtin gateway Dataplanes (in Universal zones) or MeshGatewayInstances (Kubernetes zones).

See the multi-zone docs for a refresher.

Cross-mesh

The Mesh abstraction allows users to encapsulate and isolate services inside a kind of submesh with its own CA. With a cross-mesh MeshGateway, you can expose the services of one Mesh to other Meshes by defining an API with MeshGatewayRoutes. All traffic remains inside the Kuma data plane protected by mTLS.

All meshes involved in cross-mesh communication must have mTLS enabled. To enable cross-mesh functionality for a MeshGateway listener, set the crossMesh property.

```yaml
...
mesh: default
selectors:
- match:
    kuma.io/service: cross-mesh-gateway
conf:
  listeners:
  - port: 8080
    protocol: HTTP
    crossMesh: true
    hostname: default.mesh
```

Hostname

If the listener includes a hostname value, the cross-mesh listener is reachable from all Meshes at this hostname and port; in this case, at the URL http://default.mesh:8080.

Otherwise it will be reachable at the host internal.&lt;gateway-name&gt;.&lt;mesh-of-gateway-name&gt;.mesh; for example, a gateway named cross-mesh-gateway in the mesh default would be reachable at internal.cross-mesh-gateway.default.mesh.

Without transparent proxy

If transparent proxy isn’t set up, you’ll have to add the listener explicitly as an outbound to your Dataplane objects if you want to access it:

```yaml
...
outbound:
- port: 8080
  tags:
    kuma.io/service: cross-mesh-gateway
    kuma.io/mesh: default
```

Limitations

Cross-mesh functionality isn’t supported across zones at the moment but will be in a future release.

The only protocol supported is HTTP. Like service to service traffic, all traffic to the gateway is protected with mTLS but appears to be HTTP traffic to the applications inside the mesh. In the future, this limitation may be relaxed.

There can be only one entry in selectors for a MeshGateway with crossMesh: true.

Policy support

Not all Kuma policies are applicable to Kuma Gateway (see table below). Kuma connection policies are selected by matching the source and destination expressions against sets of Kuma tags. In the case of Kuma Gateway the source selector is always matched against the Gateway listener tags, and the destination expression is matched against the backend destination tags configured on a Gateway Route.

When a Gateway Route forwards traffic, it may weight the traffic across multiple services. In this case, matching the destination for a connection policy becomes ambiguous. Although the traffic is proxied to more than one distinct service, Kuma can only configure the route with one connection policy. In this case, Kuma employs some simple heuristics to choose the policy. If all the backend destinations refer to the same service, Kuma will choose the oldest connection policy that has a matching destination service. However, if the backend destinations refer to different services, Kuma will prefer a connection policy with a wildcard destination (i.e. where the destination service is *).
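For instance, a connection policy intended to apply to traffic forwarded by the gateway above could select the listener tags as its source and use a wildcard destination, so it still matches when a route splits traffic across different services (the names and timeout values here are illustrative, not prescriptive):

```yaml
type: Timeout
mesh: default
name: edge-gateway-timeout
sources:
- match:
    # Matched against the Gateway listener tags
    kuma.io/service: edge-gateway
destinations:
- match:
    # Wildcard destination: preferred when backends span multiple services
    kuma.io/service: '*'
conf:
  connectTimeout: 5s
```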

Kuma may select different connection policies of the same type depending on the context. For example, when Kuma configures an Envoy route, there may be multiple candidate policies (due to the traffic splitting across destination services), but when Kuma configures an Envoy cluster there is usually only a single candidate (because clusters are defined to be a single service). This can result in situations where different policies (of the same type) are used for different parts of the Envoy configuration.

| Policy | Gateway support |
| --- | --- |
| Circuit Breaker | Full |
| External Services | Full |
| Fault Injection | Full |
| Health Check | Full |
| Proxy Template | Full |
| Rate Limits | Full |
| Retries | Full |
| Traffic Permissions | Full |
| Traffic Routes | None |
| Traffic Log | Partial |
| Timeouts | Full |
| VirtualOutbounds | None |

Each policy’s dedicated page includes information about builtin gateway support.