Policies

Here you can find the list of Policies that Kuma supports, which will allow you to build a modern and reliable Service Mesh.

Applying Policies

Once installed, Kuma can be configured via its policies. You can apply policies with kumactl on Universal, and with kubectl on Kubernetes. Regardless of which environment you use, you can always read the latest Kuma state with kumactl.
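
For example, a quick way to read the current state from the command line (a minimal sketch, assuming kumactl is already configured to point at your control plane):

  # List all the Meshes known to the control plane
  kumactl get meshes

  # Other policy types can be listed the same way, for example:
  kumactl get traffic-permissions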

Following best practices, you should always change your Kubernetes state with CRDs; that's why Kuma disables kumactl apply [..] when running in Kubernetes environments.
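
On Kubernetes, the same state can also be inspected through the CRDs themselves. A minimal sketch, assuming the Kuma CRDs are installed with their default names:

  # List Mesh resources (the examples below place them in kuma-system)
  kubectl get meshes --all-namespaces

  # List TrafficPermission resources across all namespaces
  kubectl get trafficpermissions --all-namespaces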

These policies can be applied either from a file, with the kumactl apply -f [path] or kubectl apply -f [path] syntax, or from standard input with the following command:

  echo "
  type: ..
  spec: ..
  " | kumactl apply -f -

or - on Kubernetes - by using the equivalent:

  echo "
  apiVersion: kuma.io/v1alpha1
  kind: ..
  spec: ..
  " | kubectl apply -f -

Below you can find the policies that Kuma supports. In addition to kumactl, you can also retrieve the state via the Kuma HTTP API.
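
For example, a minimal sketch of reading the same state over HTTP, assuming the control plane API server is reachable on its default port 5681:

  # List all Meshes
  curl http://localhost:5681/meshes

  # List the TrafficPermission policies defined in the default Mesh
  curl http://localhost:5681/meshes/default/traffic-permissions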

Mesh

This policy allows you to create multiple Service Meshes on top of the same Kuma cluster.

On Universal:

  type: Mesh
  name: default

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    namespace: kuma-system
    name: default

Mutual TLS

This policy enables automatic encrypted mTLS traffic for all the services in a Mesh.

Kuma ships with a builtin CA (Certificate Authority) which is initialized with an auto-generated root certificate. The root certificate is unique for every Mesh and is used to sign identity certificates for every data-plane.

The mTLS feature is used for AuthN/Z as well: each data-plane is assigned a SPIFFE-compatible workload identity certificate. This certificate has a SAN set to spiffe://<mesh name>/<service name>. When Kuma enforces policies that require an identity, like TrafficPermission, it will extract the SAN from the client certificate and use it for every identity matching operation.
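
For example, for a data-plane that belongs to the default Mesh and is tagged with service: backend, the identity certificate would carry a SAN like:

  spiffe://default/backend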

By default, mTLS is not enabled. You can enable Mutual TLS by updating the Mesh policy with the mtls setting.

On Universal:

  type: Mesh
  name: default
  mtls:
    enabled: true
    ca:
      builtin: {}

You can apply this configuration with kumactl apply -f [file-path].

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    namespace: kuma-system
    name: default
  spec:
    mtls:
      enabled: true
      ca:
        builtin: {}

You can apply this configuration with kubectl apply -f [file-path].

Currently Kuma only supports self-signed certificates (builtin). In the future we plan to add support for third-party Certificate Authorities.

Traffic Permissions

Traffic Permissions allow you to define security rules for services that consume other services via their Tags. It is a very useful policy for increasing security in the Mesh and compliance in the organization.

You can determine what source services are allowed to consume specific destination services. The service field is mandatory in both sources and destinations.

In Kuma 0.1.0 the sources field only allows the service tag, and only service will be enforced. This limitation will disappear in the next version of Kuma.

In the example below, destinations includes not only the service property but also an additional version tag. You can add arbitrary tags to any Dataplane.

On Universal:

  type: TrafficPermission
  name: permission-1
  mesh: default
  rules:
  - sources:
    - match:
        service: backend
    destinations:
    - match:
        service: redis
        version: "5.0"

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: TrafficPermission
  mesh: default
  metadata:
    namespace: default
    name: permission-1
  spec:
    rules:
    - sources:
      - match:
          service: backend
      destinations:
      - match:
          service: redis
          version: "5.0"

Match-All: You can match any value of a tag by using the wildcard *, like version: "*".
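
For instance, a destinations selector that matches every version of redis could look like this (a sketch using the tags from the example above):

  destinations:
  - match:
      service: redis
      version: "*"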

Traffic Route

This is a proposed policy that is not GA yet. You can set up routing manually by leveraging the ProxyTemplate policy and the low-level Envoy configuration. Join us on Slack to share your routing requirements.

The proposed policy will introduce a new TrafficRoute policy that can be used to configure both simple and more sophisticated routing rules for traffic, such as blue/green deployments and canary releases.

On Universal:

  type: TrafficRoute
  name: route-1
  mesh: default
  rules:
  - sources:
    - match:
        service: backend
    destinations:
    - match:
        service: redis
    conf:
    - weight: 90
      destination:
      - service: backend
        version: "1.0"
    - weight: 10
      destination:
      - service: backend
        version: "2.0"

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: TrafficRoute
  mesh: default
  metadata:
    namespace: default
    name: route-1
  spec:
    rules:
    - sources:
      - match:
          service: backend
      destinations:
      - match:
          service: redis
      conf:
      - weight: 90
        destination:
        - service: backend
          version: "1.0"
      - weight: 10
        destination:
        - service: backend
          version: "2.0"

Traffic Tracing

This is a proposed policy that is not GA yet. You can set up tracing manually by leveraging the ProxyTemplate policy and the low-level Envoy configuration. Join us on Slack to share your tracing requirements.

The proposed policy will enable tracing at the Mesh level by adding a tracing field.

On Universal:

  type: Mesh
  name: default
  tracing:
    enabled: true
    type: zipkin
    address: zipkin.srv:9000

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    namespace: kuma-system
    name: default
  spec:
    tracing:
      enabled: true
      type: zipkin
      address: zipkin.srv:9000

Traffic Logging

With the TrafficLogging policy you can configure access logging on every Envoy data-plane belonging to the Mesh. These logs can then be collected by any agent and shipped to systems like Splunk, ELK, and Datadog.

On Universal:

  type: Mesh
  name: default
  logging:
    accessLogs:
      enabled: true
      filePath: "/tmp/access.log"

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: Mesh
  metadata:
    namespace: kuma-system
    name: default
  spec:
    logging:
      accessLogs:
        enabled: true
        filePath: "/tmp/access.log"

Proxy Template

With the ProxyTemplate policy you can configure the low-level Envoy resources directly. The policy requires two elements in its configuration:

  • imports: this field lets you import canned ProxyTemplates provided by Kuma.
    • In the current release, the only available canned ProxyTemplate is default-proxy
    • In future releases, more of these will be available and it will also be possible for users to define their own and re-use them across their infrastructure
  • resources: the custom resources that will be applied to every Dataplane that matches the selectors.

On Universal:

  type: ProxyTemplate
  mesh: default
  name: template-1
  selectors:
  - match:
      service: backend
  conf:
    imports:
    - default-proxy
    resources:
    - ..
    - ..

On Kubernetes:

  apiVersion: kuma.io/v1alpha1
  kind: ProxyTemplate
  mesh: default
  metadata:
    namespace: default
    name: template-1
  selectors:
  - match:
      service: backend
  conf:
    imports:
    - default-proxy
    resources:
    - ..
    - ..

Below you can find an example of what a ProxyTemplate configuration could look like:

  imports:
  - default-proxy
  resources:
  - name: localhost:9901
    version: v1
    resource: |
      '@type': type.googleapis.com/envoy.api.v2.Cluster
      connectTimeout: 5s
      name: localhost:9901
      loadAssignment:
        clusterName: localhost:9901
        endpoints:
        - lbEndpoints:
          - endpoint:
              address:
                socketAddress:
                  address: 127.0.0.1
                  portValue: 9901
      type: STATIC
  - name: inbound:0.0.0.0:4040
    version: v1
    resource: |
      '@type': type.googleapis.com/envoy.api.v2.Listener
      name: inbound:0.0.0.0:4040
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 4040
      filter_chains:
      - filters:
        - name: envoy.http_connection_manager
          config:
            route_config:
              virtual_hosts:
              - routes:
                - match:
                    prefix: "/stats/prometheus"
                  route:
                    cluster: localhost:9901
                domains:
                - "*"
                name: envoy_admin
            codec_type: AUTO
            http_filters:
            - name: envoy.router
            stat_prefix: stats