Gateway

Kuma Gateway is the Kuma component that routes network traffic from outside a Kuma mesh to services inside it. The gateway can be thought of as having one foot outside the mesh, where it receives traffic, and one foot inside the mesh, where it routes that external traffic to services.

When you use a data plane proxy with a service, both inbound traffic to the service and outbound traffic from it flow through the proxy. The gateway should be deployed like any other service within the mesh. In this case, however, we want inbound traffic to go directly to the gateway; otherwise clients would have to be provided with the certificates that are generated dynamically for communication between services within the mesh. Security at the entrance to the mesh should be handled by the gateway itself.

Kuma Gateway is deployed as a Kuma Dataplane, i.e. an instance of the kuma-dp process. Like all Kuma Dataplanes, the Kuma Gateway Dataplane manages an Envoy proxy process that does the actual network traffic proxying.

There are two types of gateways:

  • Delegated: enables users to use an existing gateway, such as Kong.
  • Builtin: configures the data plane proxy itself to expose external listeners that drive traffic into the mesh.

Gateways exist within a mesh. If you have multiple meshes, each mesh will need its own gateway.

Delegated

The Dataplane entity can operate in gateway mode. This lets you integrate Kuma with existing API gateways such as Kong.

Gateway mode skips exposing inbound listeners, so the data plane proxy does not intercept ingress traffic.

Usage

On Universal, you can define the Dataplane entity like this:

```yaml
type: Dataplane
mesh: default
name: kong-01
networking:
  address: 10.0.0.1
  gateway:
    type: DELEGATED
    tags:
      kuma.io/service: kong
  outbound:
  - port: 33033
    tags:
      kuma.io/service: backend
```

When configuring your API Gateway to pass traffic to backend, set the URL to http://localhost:33033.
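For example, if the gateway is Kong running in DB-less mode, a declarative configuration pointing the backend service at the Dataplane outbound might look like this (the service and route names are illustrative):

```yaml
# Illustrative Kong declarative configuration (DB-less mode).
# The service URL points at the local outbound that the Kuma
# Dataplane exposes for the backend service.
_format_version: "3.0"
services:
- name: backend
  url: http://localhost:33033
  routes:
  - name: backend-route
    paths:
    - /backend
```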

While most ingress controllers are supported in Kuma, the recommended gateway on Kubernetes is Kong. You can use the Kong ingress controller for Kubernetes to implement authentication, transformations, and other functionality across Kubernetes clusters with zero downtime. To work with Kuma, most ingress controllers require the annotation ingress.kubernetes.io/service-upstream=true on every Kubernetes Service that you want to pass traffic to. Kuma automatically injects this annotation for every Service in a namespace that is part of the mesh, i.e. one with the kuma.io/sidecar-injection: enabled label.

As with regular data plane proxies, the Dataplane entities are generated automatically. To inject gateway data plane proxies, mark your API Gateway's Pod with the kuma.io/gateway: enabled annotation. For example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      annotations:
        kuma.io/gateway: enabled
      ...
```

Services can be exposed to an API Gateway in one specific zone, or in multi-zone. For the latter, we need to create a dedicated Kubernetes Service object of type ExternalName, with externalName set to the .mesh DNS record of the service that we want to expose; this record is resolved by Kuma's internal service discovery.

Example setting up Kong Ingress Controller

We will follow these instructions to set up an echo service that is reached through Kong. These instructions are mostly taken from the Kong docs.

To get started, install Kuma on your cluster and label the default namespace with kuma.io/sidecar-injection: enabled.

Install Kong using Helm.
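Assuming kubectl and Helm are already installed, these two setup steps might look like the following (the Helm release name kong is an assumption):

```shell
# Label the default namespace so Kuma injects sidecars into its Pods
kubectl label namespace default kuma.io/sidecar-injection=enabled

# Install Kong with Helm (release name "kong" is illustrative)
helm repo add kong https://charts.konghq.com
helm repo update
helm install kong kong/kong
```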

Start an echo-service:

```shell
kubectl apply -f https://bit.ly/echo-service
```

And an ingress:

```shell
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /foo
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo
            port:
              number: 80
" | kubectl apply -f -
```

You can access your ingress with curl -i $PROXY_IP/foo, where $PROXY_IP is the IP retrieved from the Service that exposes Kong outside your cluster.
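One way to retrieve $PROXY_IP, assuming Kong was installed as a Helm release named kong in the default namespace (the Service name below follows that assumption):

```shell
# Fetch the external IP of the Service that exposes Kong
PROXY_IP=$(kubectl get service kong-kong-proxy \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -i $PROXY_IP/foo
```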

You can check that the sidecar is running by checking the number of containers in each pod:

```shell
kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
echo-5fc5b5bc84-zr9kl                   2/2     Running   0          41m
kong-1645186528-kong-648b9596c7-f2xfv   3/3     Running   2          40m
```

Example Gateway in Multi-Zone

In the previous example, we set up an echo service that listens on port 80 and is deployed in the default namespace.

We will now make sure that this service works correctly with multi-zone. In order to do so, the following Service needs to be created manually:

```shell
echo "
apiVersion: v1
kind: Service
metadata:
  name: echo-multizone
  namespace: default
spec:
  type: ExternalName
  externalName: echo.default.svc.80.mesh
" | kubectl apply -f -
```

Finally, we need to create a corresponding Kubernetes Ingress that routes /bar to the multi-zone service:

```shell
echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-multizone
  namespace: default
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: echo-multizone
            port:
              number: 80
" | kubectl apply -f -
```

Note that since we are addressing the service by its domain name echo.default.svc.80.mesh, we always refer to port 80 in the Ingress (this port is only a placeholder and is automatically replaced with the actual port of the service).

If we want to expose a Service in one zone only (as opposed to multi-zone), we can simply use the service name in the Ingress definition without creating an ExternalName entry; this is what we did in our first example.

For an in-depth example on deploying Kuma with Kong for Kubernetes, please follow this demo application guide.

Builtin

The builtin gateway is currently experimental. It is enabled with the kuma-cp flag --experimental-meshgateway or the environment variable KUMA_EXPERIMENTAL_MESHGATEWAY.

The builtin type of gateway is integrated into the core Kuma control plane, so you can configure gateway listeners and routes to services directly with Kuma policies.

As with the delegated gateway, the builtin gateway is configured with a Dataplane entity:

```yaml
type: Dataplane
mesh: default
name: gateway-instance-1
networking:
  address: 127.0.0.1
  gateway:
    type: BUILTIN
    tags:
      kuma.io/service: edge-gateway
```

A builtin gateway Dataplane has neither inbound nor outbound configuration.

To configure your gateway, Kuma provides these resources:

  • MeshGateway is used to configure the listeners exposed by the gateway.
  • MeshGatewayRoute is used to configure routes that direct traffic from listeners to other services.

Usage

We will set up a simple gateway that exposes an HTTP listener and two routes to imaginary services: "frontend" and "api".

The first thing you need is a Dataplane object for your gateway:

```yaml
type: Dataplane
mesh: default
name: gateway-instance-1
networking:
  address: 127.0.0.1
  gateway:
    type: BUILTIN
    tags:
      kuma.io/service: edge-gateway
```

Note that this gateway has a kuma.io/service tag. We will use this tag to bind the policies that configure the gateway.

Since this is a Universal deployment, you now need to run kuma-dp:

```shell
# kuma-token-gateway needs to be generated as for a regular data plane proxy;
# my-gateway.yaml contains the Dataplane resource described above.
kuma-dp run \
  --cp-address=https://localhost:5678/ \
  --dns-enabled=false \
  --dataplane-token-file=kuma-token-gateway \
  --dataplane-file=my-gateway.yaml
```
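The data plane token can be generated with kumactl, just as for a regular data plane proxy; a sketch (exact flags may vary by Kuma version):

```shell
# Generate a token for the gateway's data plane proxy
kumactl generate dataplane-token --name=gateway-instance-1 > kuma-token-gateway
```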

To make it easier to run gateways on Kubernetes, Kuma provides a builtin resource type, MeshGatewayInstance. This type requests that the control plane create and manage a Kubernetes Deployment and Service suitable for providing serving capacity for the gateway with the matching service tags.

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayInstance
metadata:
  name: edge-gateway
  namespace: default
spec:
  replicas: 1
  serviceType: LoadBalancer
  tags:
    kuma.io/service: edge-gateway
" | kubectl apply -f -
```

In the example above, the control plane will create a new Deployment in the same namespace as the MeshGatewayInstance. This Deployment will have the requested number of builtin gateway Dataplane pod replicas, which will be configured as part of the service named in the MeshGatewayInstance tags. When a Kuma MeshGateway is matched to the MeshGatewayInstance, the control plane will also create a new Service to send network traffic to the builtin Dataplane pods. The Service will be of the type requested in the MeshGatewayInstance, and its ports will automatically be adjusted to match the listeners on the corresponding MeshGateway.

Now that the dataplane is running we can describe the gateway listener:

Universal:

```yaml
type: MeshGateway
mesh: default
name: edge-gateway
selectors:
- match:
    kuma.io/service: edge-gateway
conf:
  listeners:
  - port: 8080
    protocol: HTTP
    hostname: foo.example.com
    tags:
      port: http/8080
```

Kubernetes:

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGateway
mesh: default
metadata:
  name: edge-gateway
spec:
  selectors:
  - match:
      kuma.io/service: edge-gateway
  conf:
    listeners:
    - port: 8080
      protocol: HTTP
      hostname: foo.example.com
      tags:
        port: http/8080
" | kubectl apply -f -
```

This policy creates a listener on port 8080 that receives any traffic with the Host header set to foo.example.com. Notice that listeners have tags, just like data plane proxies; this is useful when binding routes to listeners.

These are Kuma policies, so if you are running a multi-zone deployment they need to be created on the global control plane. See the dedicated section for detailed information.

We will now define routes that take incoming traffic and send it either to our api or our frontend service, depending on the path of the HTTP request:

Universal:

```yaml
type: MeshGatewayRoute
mesh: default
name: edge-gateway-route
selectors:
- match:
    kuma.io/service: edge-gateway
    port: http/8080
conf:
  http:
    rules:
    - matches:
      - path:
          match: PREFIX
          value: /api
      backends:
      - destination:
          kuma.io/service: api
    - matches:
      - path:
          match: PREFIX
          value: /
      backends:
      - destination:
          kuma.io/service: frontend
```

Kubernetes:

```shell
echo "
apiVersion: kuma.io/v1alpha1
kind: MeshGatewayRoute
mesh: default
metadata:
  name: edge-gateway-route
spec:
  selectors:
  - match:
      kuma.io/service: edge-gateway
      port: http/8080
  conf:
    http:
      rules:
      - matches:
        - path:
            match: PREFIX
            value: /api
        backends:
        - destination:
            kuma.io/service: api
      - matches:
        - path:
            match: PREFIX
            value: /
        backends:
        - destination:
            kuma.io/service: frontend
" | kubectl apply -f -
```

Because route rules are matched in order of specificity, the /api rule takes precedence over the / rule: a request for /api/foo goes to the api service, whereas a request for /asset goes to the frontend service.
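The routing behavior described above can be exercised with curl, assuming the gateway is reachable at $GATEWAY_IP (a placeholder for the address of gateway-instance-1); the request paths are illustrative:

```shell
# Matches the /api prefix rule, so it is routed to the api service
curl -H 'Host: foo.example.com' http://$GATEWAY_IP:8080/api/foo

# Falls through to the / prefix rule, so it is routed to the frontend service
curl -H 'Host: foo.example.com' http://$GATEWAY_IP:8080/asset
```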

Multi-zone

The Kuma Gateway resource types, MeshGateway and MeshGatewayRoute, are synced across zones by the Kuma control plane. If you have a multi-zone deployment, follow existing Kuma practice and create any Kuma Gateway resources in the global control plane. Once these resources exist, you can provision serving capacity in the zones where it is needed by deploying builtin gateway Dataplanes (in Universal zones) or MeshGatewayInstances (Kubernetes zones).

Policy support

Not all Kuma policies are applicable to Kuma Gateway (see the table below). Kuma connection policies are selected by matching source and destination expressions against sets of Kuma tags. In the case of Kuma Gateway, the source selector is always matched against the gateway listener tags, and the destination expression is matched against the backend destination tags configured on a gateway route.

When a Gateway Route forwards traffic, it may weight the traffic across multiple services. In this case, matching the destination for a connection policy becomes ambiguous. Although the traffic is proxied to more than one distinct service, Kuma can only configure the route with one connection policy. In this case, Kuma employs some simple heuristics to choose the policy. If all the backend destinations refer to the same service, Kuma will choose the oldest connection policy that has a matching destination service. However, if the backend destinations refer to different services, Kuma will prefer a connection policy with a wildcard destination (i.e. where the destination service is *).
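For instance, a connection policy with a wildcard destination is the preferred candidate when a route's backends refer to different services. A sketch using Kuma's Timeout policy (the policy name and timeout values are illustrative):

```yaml
# A Timeout policy whose wildcard destination makes it a
# candidate for routes that split traffic across services.
type: Timeout
mesh: default
name: gateway-timeouts
sources:
- match:
    kuma.io/service: edge-gateway
destinations:
- match:
    kuma.io/service: '*'
conf:
  connectTimeout: 5s
  http:
    requestTimeout: 15s
```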

Kuma may select different connection policies of the same type depending on the context. For example, when Kuma configures an Envoy route, there may be multiple candidate policies (due to the traffic splitting across destination services), but when Kuma configures an Envoy cluster there is usually only a single candidate (because clusters are defined to be a single service). This can result in situations where different policies (of the same type) are used for different parts of the Envoy configuration.

Policy               Gateway support
-------------------  ---------------
Circuit Breaker      Full
External Services    Full
Fault Injection      Full
Health Check         Full
Proxy Template       Full
Rate Limits          Full
Retries              Full
Traffic Permissions  Full
Traffic Routes       None
Traffic Log          Partial
Timeouts             Full
Virtual Outbounds    None

Each policy's dedicated documentation page includes information about builtin gateway support.