Provisioning Identity through SDS

This task shows how to enable SDS (Secret Discovery Service) for Istio identity provisioning.

By default, the keys and certificates of Istio workloads are generated by Citadel and distributed to sidecars through secret-volume mounted files. This approach has the following minor drawbacks:

  • Performance regression during certificate rotation: When certificate rotation happens, Envoy is hot restarted to pick up the new key and certificate, causing a performance regression.

  • Potential security vulnerability: The workload private keys are distributed through Kubernetes secrets, with known risks.

These issues can be addressed by enabling the SDS identity provision flow, which works as follows (see the configuration sketch after the list):

  • The workload sidecar Envoy requests the key and certificate from the Citadel agent: The Citadel agent is an SDS server running as a per-node DaemonSet. In the request, Envoy passes a Kubernetes service account JWT to the agent.

  • The Citadel agent generates a key pair and sends a CSR to Citadel: Citadel verifies the JWT and issues the certificate to the Citadel agent.

  • The Citadel agent sends the key and certificate back to the workload sidecar.
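
Under the hood, this exchange uses the standard Envoy SDS API over a Unix domain socket. The following is a minimal sketch of what an Envoy TLS context fetching its certificate over SDS can look like; the socket path and the secret name are illustrative assumptions, not the exact values Istio generates:

# Sketch only (Envoy v2 API): a TLS context that fetches its certificate
# over SDS via gRPC to a Unix domain socket instead of reading files.
tls_context:
  common_tls_context:
    tls_certificate_sds_secret_configs:
    - name: default                                   # assumed secret name
      sds_config:
        api_config_source:
          api_type: GRPC
          grpc_services:
          - google_grpc:
              target_uri: unix:/var/run/sds/uds_path  # assumed socket path
              stat_prefix: sdsstat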

The SDS approach has the following benefits:

  • The private key never leaves the node: It only resides in the Citadel agent's and the Envoy sidecar's memory.

  • The secret volume mount is no longer needed: The reliance on Kubernetes secrets is eliminated.

  • The sidecar Envoy is able to dynamically renew the key and certificate through the SDS API: Certificate rotations no longer require an Envoy restart.

Before you begin

Follow the Istio installation guide to set up Istio with SDS and global mutual TLS enabled.
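
For example, with the Helm-based installation, an SDS-enabled manifest can be rendered and applied along the following lines. The values file path reflects the release layout at the time of writing and may differ in your version; the output file name matches the istio-auth-sds.yaml removed in the cleanup section:

$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --values install/kubernetes/helm/istio/values-istio-sds-auth.yaml > istio-auth-sds.yaml
$ kubectl apply -f istio-auth-sds.yaml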

Service-to-service mutual TLS using key/certificate provisioned through SDS

Follow the authentication policy task to set up the test services.


$ kubectl create ns foo
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
$ kubectl create ns bar
$ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
$ kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar

Verify all mutual TLS requests succeed:

$ for from in "foo" "bar"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 200
sleep.bar to httpbin.foo: 200
sleep.bar to httpbin.bar: 200
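
To additionally confirm that the connections are secured with mutual TLS, you can check for the X-Forwarded-Client-Cert header that the server-side Envoy injects on mutual TLS connections (whether the header appears depends on your mesh configuration, so treat this as a hedged spot check):

$ kubectl exec $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c sleep -n foo -- curl http://httpbin.foo:8000/headers -s | grep X-Forwarded-Client-Cert

If mutual TLS is in effect, the header value contains the SPIFFE identities of the client and server workloads.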

Verifying no secret-volume mounted file is generated

To verify that no secret-volume mounted file is generated, access the deployed workload sidecar container:

$ kubectl exec -it $(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name}) -c istio-proxy -n foo -- /bin/bash

As you can see, there is no secret file mounted in the /etc/certs folder.
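
For example, listing the directory from inside the sidecar shell returns nothing (the exact output depends on the proxy's base image; the directory may be empty or absent):

$ ls /etc/certs
ls: cannot access '/etc/certs': No such file or directory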

Securing SDS with pod security policies

The Istio Secret Discovery Service (SDS) uses the Citadel agent to distribute the certificate to the Envoy sidecar via a Unix domain socket. All pods running on the same Kubernetes node share the Citadel agent and the Unix domain socket.

To prevent unexpected modifications to the Unix domain socket, enable pod security policies to restrict the pods' permissions on the Unix domain socket. Otherwise, a malicious user with permission to modify a deployment could hijack the Unix domain socket to break the SDS service or steal the identity credentials of other pods running on the same Kubernetes node.
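
To see the socket that needs protecting, you can list the host path from inside a Citadel agent pod. This sketch assumes the agent mounts the socket directory at /var/run/sds (the same pathPrefix used in the policies below) and that its image provides ls:

$ kubectl exec $(kubectl get pod -l app=istio-nodeagent -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system -- ls -al /var/run/sds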

To enable the pod security policy, perform the following steps:

  • The Citadel agent fails to start unless it can create the required Unix domain socket. Apply the following pod security policy to allow only the Citadel agent to modify the Unix domain socket:
$ cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: istio-nodeagent
spec:
  allowedHostPaths:
  - pathPrefix: "/var/run/sds"
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: istio-nodeagent
  namespace: istio-system
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - istio-nodeagent
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: istio-nodeagent
  namespace: istio-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: istio-nodeagent
subjects:
- kind: ServiceAccount
  name: istio-nodeagent-service-account
  namespace: istio-system
EOF
  • To stop other pods from modifying the Unix domain socket, change the allowedHostPaths configuration for the path the Citadel agent uses for the Unix domain socket to readOnly: true.

The following pod security policy assumes no other pod security policy was applied before. If you already applied another pod security policy, add the following configuration values to the existing policies instead of applying the configuration directly.

$ cat <<EOF | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: istio-sds-uds
spec:
  # Protect the Unix domain socket from unauthorized modification
  allowedHostPaths:
  - pathPrefix: "/var/run/sds"
    readOnly: true
  # Allow the Istio sidecar injector to work
  allowedCapabilities:
  - NET_ADMIN
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: istio-sds-uds
rules:
- apiGroups:
  - extensions
  resources:
  - podsecuritypolicies
  resourceNames:
  - istio-sds-uds
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-sds-uds
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-sds-uds
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
EOF
  • Enable pod security policies for your platform. Each supported platform enables pod security policies differently; refer to your platform's documentation. If you are using Google Kubernetes Engine (GKE), you must enable the pod security policy controller.

Grant all needed permissions in the pod security policy before enabling it. Once the policy is enabled, pods won't start if they require any permissions not granted.
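
For example, on GKE the controller can be toggled with the beta cluster update command; the cluster name below is a placeholder:

$ gcloud beta container clusters update <cluster-name> --enable-pod-security-policy

The same command with --no-enable-pod-security-policy disables the controller again during cleanup.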

  • Run the following command to restart the Citadel agents:
$ kubectl delete pod -l 'app=istio-nodeagent' -n istio-system
pod "istio-nodeagent-dplx2" deleted
pod "istio-nodeagent-jrbmx" deleted
pod "istio-nodeagent-rz878" deleted
  • To verify that the Citadel agents work with the enabled pod security policy, wait a few seconds and run the following command to confirm the agents started successfully:
$ kubectl get pod -l 'app=istio-nodeagent' -n istio-system
NAME                    READY   STATUS    RESTARTS   AGE
istio-nodeagent-p4p7g   1/1     Running   0          4s
istio-nodeagent-qdwj6   1/1     Running   0          5s
istio-nodeagent-zsk2b   1/1     Running   0          14s
  • Run the following command to start a normal pod:
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: normal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: normal
  template:
    metadata:
      labels:
        app: normal
    spec:
      containers:
      - name: normal
        image: pstauffer/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
EOF
  • To verify that the normal pod works with the pod security policy enabled, wait a few seconds and run the following command to confirm the pod started successfully:
$ kubectl get pod -l 'app=normal'
NAME                      READY   STATUS    RESTARTS   AGE
normal-64c6956774-ptpfh   2/2     Running   0          8s
  • Start a malicious pod that tries to mount the Unix domain socket with write permission:
$ cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: malicious
spec:
  replicas: 1
  selector:
    matchLabels:
      app: malicious
  template:
    metadata:
      labels:
        app: malicious
    spec:
      containers:
      - name: malicious
        image: pstauffer/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: sds-uds
          mountPath: /var/run/sds
      volumes:
      - name: sds-uds
        hostPath:
          path: /var/run/sds
          type: ""
EOF
  • To verify that the Unix domain socket is protected, run the following command to confirm the malicious pod failed to start due to the pod security policy:
$ kubectl describe rs -l 'app=malicious' | grep Failed
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
ReplicaFailure  True   FailedCreate
  Warning  FailedCreate  4s (x13 over 24s)  replicaset-controller  Error creating: pods "malicious-7dcfb8d648-" is forbidden: unable to validate against any pod security policy: [spec.containers[0].volumeMounts[0].readOnly: Invalid value: false: must be read-only]

Cleanup

  • Clean up the test services and the Istio control plane:
$ kubectl delete ns foo
$ kubectl delete ns bar
$ kubectl delete -f istio-auth-sds.yaml
  • Disable the pod security policy in the cluster using the documentation of your platform. If you are using GKE, disable the pod security policy controller.

  • Delete the pod security policy and the test deployments:

$ kubectl delete psp istio-sds-uds istio-nodeagent
$ kubectl delete role istio-nodeagent -n istio-system
$ kubectl delete rolebinding istio-nodeagent -n istio-system
$ kubectl delete clusterrole istio-sds-uds
$ kubectl delete clusterrolebinding istio-sds-uds
$ kubectl delete deploy malicious
$ kubectl delete deploy normal

Caveats

Currently, the SDS identity provision flow has the following caveats:

  • SDS support is currently in Alpha.

  • Smoothly migrating a cluster from using secret volume mount to using SDS is a work in progress.

See also

DNS Certificate Management

Provision and manage DNS certificates in Istio.

Introducing the Istio v1beta1 Authorization Policy

Introduction, motivation and design principles for the Istio v1beta1 Authorization Policy.

Secure Webhook Management

A more secure way to manage Istio webhooks.

Multi-Mesh Deployments for Isolation and Boundary Protection

Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

App Identity and Access Adapter

Using Istio to secure multi-cloud Kubernetes applications with zero code changes.

Change in Secret Discovery Service in Istio 1.3

Taking advantage of Kubernetes trustworthy JWTs to issue certificates for workload instances more securely.