SPIRE

SPIRE is a production-ready implementation of the SPIFFE specification that performs node and workload attestation in order to securely issue cryptographic identities to workloads running in heterogeneous environments. SPIRE can be configured as a source of cryptographic identities for Istio workloads through an integration with Envoy's SDS API. Istio detects the existence of a UNIX Domain Socket implementing the Envoy SDS API at a defined socket path, allowing the Envoy proxy to communicate with SPIRE and fetch identities directly from it.

This integration with SPIRE provides flexible attestation options not available with the default Istio identity management while harnessing Istio’s powerful service management. For example, SPIRE’s plugin architecture enables diverse workload attestation options beyond the Kubernetes namespace and service account attestation offered by Istio. SPIRE’s node attestation extends attestation to the physical or virtual hardware on which workloads run.

For a quick demo of how this SPIRE integration with Istio works, see Integrating SPIRE as a CA through Envoy’s SDS API.

Note that this integration requires Istio version 1.14 or later for both istioctl and the data plane.

The integration is compatible with Istio upgrades.

Install SPIRE

Option 1: Quick start

Istio provides a basic sample installation to quickly get SPIRE up and running:

  $ kubectl apply -f samples/security/spire/spire-quickstart.yaml

This will deploy SPIRE into your cluster, along with two additional components: the SPIFFE CSI Driver — used to share the SPIRE Agent’s UNIX Domain Socket with the other pods throughout the node — and the SPIRE Controller Manager, a facilitator that performs workload registration and establishes federation relationships within Kubernetes. See Install Istio to configure Istio and integrate with the SPIFFE CSI Driver.
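Before moving on, it is worth confirming that the SPIRE components are up. A minimal check, assuming the quick start's default spire namespace and app=spire-server label (both used later in this guide):

  $ kubectl wait pod --for=condition=Ready -n spire -l app=spire-server --timeout=120s
  $ kubectl get pods -n spire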

Option 2: Configure a custom SPIRE installation

See SPIRE's Quick start for Kubernetes guide to get started deploying SPIRE into your Kubernetes environment. See SPIRE CA Integration Prerequisites for more information on configuring SPIRE to integrate with Istio deployments.

SPIRE CA Integration Prerequisites

To integrate your SPIRE deployment with Istio, configure SPIRE:

  1. Access the SPIRE Agent reference and configure the SPIRE Agent socket path to match the socket path defined for Envoy SDS:

     socket_path = "/run/secrets/workload-spiffe-uds/socket"

  2. Share the SPIRE Agent socket with the pods on each node by deploying the SPIFFE CSI Driver. The -workload-api-socket-dir argument passed to the driver must be the directory where the socket is mounted.
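For reference, a minimal sketch of the two pieces above. The HCL fragment belongs in the SPIRE Agent configuration file, and the argument goes on the SPIFFE CSI Driver container in its DaemonSet; all surrounding configuration is elided:

  # SPIRE Agent configuration fragment (HCL): the socket the Workload API is served on
  agent {
      socket_path = "/run/secrets/workload-spiffe-uds/socket"
  }

  # SPIFFE CSI Driver container args fragment (YAML): must point at the same directory
  args:
    - "-workload-api-socket-dir"
    - "/run/secrets/workload-spiffe-uds"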

See Install Istio to configure Istio to integrate with the SPIFFE CSI Driver.

If the socket is not created by SPIRE before the Istio agent starts up, the Istio agent itself becomes the Envoy SDS listener and SPIRE-issued identities are not used. To guarantee the correct ordering, the IstioOperator configuration in Install Istio adds an init container that waits for the socket to appear before starting the proxy.

Install Istio

Option 1: Configuration for Workload Registration with the SPIRE Controller Manager

When the SPIRE Controller Manager is deployed alongside a SPIRE Server, a new entry is automatically registered for each pod that matches the selector defined in a ClusterSPIFFEID custom resource.

A ClusterSPIFFEID must be applied before installing Istio so that the Ingress Gateway can obtain its certificates. Additionally, the Ingress Gateway pod must match the selector defined in the ClusterSPIFFEID; if a registration entry is not automatically created for the gateway workload during install, it never reaches the Ready state and the installation fails.

  1. Create example ClusterSPIFFEID:

     $ kubectl apply -f - <<EOF
     apiVersion: spire.spiffe.io/v1alpha1
     kind: ClusterSPIFFEID
     metadata:
       name: example
     spec:
       spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
       podSelector:
         matchLabels:
           spiffe.io/spire-managed-identity: "true"
     EOF

    The example ClusterSPIFFEID enables automatic workload registration for all workloads that carry the spiffe.io/spire-managed-identity: "true" label. For pods with this label, the fields referenced in spiffeIDTemplate are resolved against the pod to form the SPIFFE ID.

  2. Download the Istio release.

  3. Create the Istio configuration with custom patches for the Ingress Gateway and the istio-proxy container. The Ingress Gateway component carries the spiffe.io/spire-managed-identity: "true" label so that it matches the selector of the ClusterSPIFFEID created above.

     $ cat <<'EOF' > ./istio.yaml
     apiVersion: install.istio.io/v1alpha1
     kind: IstioOperator
     metadata:
       namespace: istio-system
     spec:
       profile: default
       meshConfig:
         trustDomain: example.org
       values:
         global:
         # This is used to customize the sidecar template
         sidecarInjectorWebhook:
           templates:
             spire: |
               spec:
                 containers:
                 - name: istio-proxy
                   volumeMounts:
                   - name: workload-socket
                     mountPath: /run/secrets/workload-spiffe-uds
                     readOnly: true
                 volumes:
                 - name: workload-socket
                   csi:
                     driver: "csi.spiffe.io"
                     readOnly: true
       components:
         ingressGateways:
         - name: istio-ingressgateway
           enabled: true
           label:
             istio: ingressgateway
             spiffe.io/spire-managed-identity: "true"
           k8s:
             overlays:
             - apiVersion: apps/v1
               kind: Deployment
               name: istio-ingressgateway
               patches:
               - path: spec.template.spec.volumes.[name:workload-socket]
                 value:
                   name: workload-socket
                   csi:
                     driver: "csi.spiffe.io"
                     readOnly: true
               - path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket]
                 value:
                   name: workload-socket
                   mountPath: "/run/secrets/workload-spiffe-uds"
                   readOnly: true
               - path: spec.template.spec.initContainers
                 value:
                 - name: wait-for-spire-socket
                   image: busybox:1.28
                   volumeMounts:
                   - name: workload-socket
                     mountPath: /run/secrets/workload-spiffe-uds
                     readOnly: true
                   env:
                   - name: CHECK_FILE
                     value: /run/secrets/workload-spiffe-uds/socket
                   command:
                   - sh
                   - "-c"
                   - |-
                     echo "$(date -Iseconds)" Waiting for: ${CHECK_FILE}
                     while [[ ! -e ${CHECK_FILE} ]] ; do
                       echo "$(date -Iseconds)" File does not exist: ${CHECK_FILE}
                       sleep 15
                     done
                     ls -l ${CHECK_FILE}
     EOF
  4. Apply the configuration:

     $ istioctl install --skip-confirmation -f ./istio.yaml
  5. Check Ingress-gateway pod state:

     $ kubectl get pods -n istio-system
     NAME                                    READY   STATUS    RESTARTS   AGE
     istio-ingressgateway-5b45864fd4-lgrxs   1/1     Running   0          17s
     istiod-989f54d9c-sg7sn                  1/1     Running   0          23s

    The Ingress-gateway pod is Ready since the corresponding registration entry is automatically created for it on the SPIRE Server. Envoy is able to fetch cryptographic identities from SPIRE.
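You can also confirm that the SPIRE Controller Manager created the entry on the server side; the namespace, label, and binary path below assume the quick start deployment:

  $ SPIRE_SERVER_POD=$(kubectl get pod -l app=spire-server -n spire -o jsonpath="{.items[0].metadata.name}")
  $ kubectl exec -n spire "$SPIRE_SERVER_POD" -c spire-server -- ./bin/spire-server entry show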

Note that the SPIRE Controller Manager is already included in the quick start installation.

Option 2: Configuration for Manual Workload Registration with SPIRE

  1. Download the Istio release.

  2. After deploying SPIRE into your environment and verifying that all deployments are in the Ready state, configure Istio with custom patches for the Ingress Gateway as well as for istio-proxy.

    Create Istio configuration:

     $ cat <<'EOF' > ./istio.yaml
     apiVersion: install.istio.io/v1alpha1
     kind: IstioOperator
     metadata:
       namespace: istio-system
     spec:
       profile: default
       meshConfig:
         trustDomain: example.org
       values:
         global:
         # This is used to customize the sidecar template
         sidecarInjectorWebhook:
           templates:
             spire: |
               spec:
                 containers:
                 - name: istio-proxy
                   volumeMounts:
                   - name: workload-socket
                     mountPath: /run/secrets/workload-spiffe-uds
                     readOnly: true
                 volumes:
                 - name: workload-socket
                   csi:
                     driver: "csi.spiffe.io"
                     readOnly: true
       components:
         ingressGateways:
         - name: istio-ingressgateway
           enabled: true
           label:
             istio: ingressgateway
           k8s:
             overlays:
             - apiVersion: apps/v1
               kind: Deployment
               name: istio-ingressgateway
               patches:
               - path: spec.template.spec.volumes.[name:workload-socket]
                 value:
                   name: workload-socket
                   csi:
                     driver: "csi.spiffe.io"
                     readOnly: true
               - path: spec.template.spec.containers.[name:istio-proxy].volumeMounts.[name:workload-socket]
                 value:
                   name: workload-socket
                   mountPath: "/run/secrets/workload-spiffe-uds"
                   readOnly: true
               - path: spec.template.spec.initContainers
                 value:
                 - name: wait-for-spire-socket
                   image: busybox:1.28
                   volumeMounts:
                   - name: workload-socket
                     mountPath: /run/secrets/workload-spiffe-uds
                     readOnly: true
                   env:
                   - name: CHECK_FILE
                     value: /run/secrets/workload-spiffe-uds/socket
                   command:
                   - sh
                   - "-c"
                   - |-
                     echo "$(date -Iseconds)" Waiting for: ${CHECK_FILE}
                     while [[ ! -e ${CHECK_FILE} ]] ; do
                       echo "$(date -Iseconds)" File does not exist: ${CHECK_FILE}
                       sleep 15
                     done
                     ls -l ${CHECK_FILE}
     EOF
  3. Apply the configuration:

     $ istioctl install --skip-confirmation -f ./istio.yaml
  4. Check Ingress-gateway pod state:

     $ kubectl get pods -n istio-system
     NAME                                    READY   STATUS    RESTARTS   AGE
     istio-ingressgateway-5b45864fd4-lgrxs   0/1     Running   0          20s
     istiod-989f54d9c-sg7sn                  1/1     Running   0          25s

    The Ingress Gateway pod and data plane containers will only reach the Ready state once a corresponding registration entry has been created for them on the SPIRE Server. Envoy is then able to fetch cryptographic identities from SPIRE. See Register workloads to register entries for services in your mesh.

The Istio configuration mounts the SPIFFE CSI Driver volume in both the Ingress Gateway and the sidecars that will be injected into workload pods, granting them access to the SPIRE Agent's UNIX Domain Socket.

This configuration also adds an initContainer to the gateway that waits for SPIRE to create the UNIX Domain Socket before starting istio-proxy. If the SPIRE Agent is not ready, or has not been configured with the same socket path, the initContainer will wait indefinitely.
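If the gateway appears stuck at this point, the init container's log shows whether it is still polling for the socket. A quick check, using the container name defined in the overlay above:

  $ kubectl logs -n istio-system -l istio=ingressgateway -c wait-for-spire-socket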

Register workloads

This section describes the options available for registering workloads in a SPIRE Server.

Option 1: Registration using the SPIRE Controller Manager

New entries will be automatically registered for each new pod that matches the selector defined in a ClusterSPIFFEID custom resource. See Configuration for Workload Registration with the SPIRE Controller Manager for the example ClusterSPIFFEID configuration.

  1. Deploy an example workload:

     $ istioctl kube-inject --filename samples/security/spire/sleep-spire.yaml | kubectl apply -f -

    In addition to the spiffe.io/spire-managed-identity label, the workload needs the SPIFFE CSI Driver volume in order to access the SPIRE Agent socket. To accomplish this, you can either leverage the spire pod annotation template from the Install Istio section or add the CSI volume to the deployment spec of your workload. Both alternatives are highlighted in the example snippet below:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: sleep
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: sleep
       template:
         metadata:
           labels:
             app: sleep
             spiffe.io/spire-managed-identity: "true"
           # Injects custom sidecar template
           annotations:
             inject.istio.io/templates: "sidecar,spire"
         spec:
           terminationGracePeriodSeconds: 0
           serviceAccountName: sleep
           containers:
           - name: sleep
             image: curlimages/curl
             command: ["/bin/sleep", "3650d"]
             imagePullPolicy: IfNotPresent
             volumeMounts:
             - name: tmp
               mountPath: /tmp
             securityContext:
               runAsUser: 1000
           volumes:
           - name: tmp
             emptyDir: {}
           # CSI volume
           - name: workload-socket
             csi:
               driver: "csi.spiffe.io"
               readOnly: true

See Verifying that identities were created for workloads to check issued identities.

Note that the SPIRE Controller Manager is already included in the quick start installation.

Option 2: Manual Registration

To strengthen workload attestation, SPIRE can verify a workload against a group of selector values based on different parameters. Skip these steps if you installed SPIRE by following the quick start, since it uses automatic registration.

  1. Generate an entry for an Ingress Gateway with a set of selectors such as the pod name and pod UID:

     $ INGRESS_POD=$(kubectl get pod -l istio=ingressgateway -n istio-system -o jsonpath="{.items[0].metadata.name}")
     $ INGRESS_POD_UID=$(kubectl get pods -n istio-system "$INGRESS_POD" -o jsonpath='{.metadata.uid}')
  2. Get the spire-server pod:

     $ SPIRE_SERVER_POD=$(kubectl get pod -l app=spire-server -n spire -o jsonpath="{.items[0].metadata.name}")
  3. Register an entry for the SPIRE Agent running on the node:

     $ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
         /opt/spire/bin/spire-server entry create \
         -spiffeID spiffe://example.org/ns/spire/sa/spire-agent \
         -selector k8s_psat:cluster:demo-cluster \
         -selector k8s_psat:agent_ns:spire \
         -selector k8s_psat:agent_sa:spire-agent \
         -node -socketPath /run/spire/sockets/server.sock

     Entry ID         : d38c88d0-7d7a-4957-933c-361a0a3b039c
     SPIFFE ID        : spiffe://example.org/ns/spire/sa/spire-agent
     Parent ID        : spiffe://example.org/spire/server
     Revision         : 0
     TTL              : default
     Selector         : k8s_psat:agent_ns:spire
     Selector         : k8s_psat:agent_sa:spire-agent
     Selector         : k8s_psat:cluster:demo-cluster
  4. Register an entry for the Ingress-gateway pod:

     $ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
         /opt/spire/bin/spire-server entry create \
         -spiffeID spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account \
         -parentID spiffe://example.org/ns/spire/sa/spire-agent \
         -selector k8s:sa:istio-ingressgateway-service-account \
         -selector k8s:ns:istio-system \
         -selector k8s:pod-uid:"$INGRESS_POD_UID" \
         -dns "$INGRESS_POD" \
         -dns istio-ingressgateway.istio-system.svc \
         -socketPath /run/spire/sockets/server.sock

     Entry ID         : 6f2fe370-5261-4361-ac36-10aae8d91ff7
     SPIFFE ID        : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
     Parent ID        : spiffe://example.org/ns/spire/sa/spire-agent
     Revision         : 0
     TTL              : default
     Selector         : k8s:ns:istio-system
     Selector         : k8s:pod-uid:63c2bbf5-a8b1-4b1f-ad64-f62ad2a69807
     Selector         : k8s:sa:istio-ingressgateway-service-account
     DNS name         : istio-ingressgateway.istio-system.svc
     DNS name         : istio-ingressgateway-5b45864fd4-lgrxs
  5. Deploy an example workload:

     $ istioctl kube-inject --filename samples/security/spire/sleep-spire.yaml | kubectl apply -f -

    Note that the workload needs the SPIFFE CSI Driver volume in order to access the SPIRE Agent socket. To accomplish this, you can either leverage the spire pod annotation template from the Install Istio section or add the CSI volume to the deployment spec of your workload. Both alternatives are highlighted in the example snippet below:

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: sleep
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: sleep
       template:
         metadata:
           labels:
             app: sleep
           # Injects custom sidecar template
           annotations:
             inject.istio.io/templates: "sidecar,spire"
         spec:
           terminationGracePeriodSeconds: 0
           serviceAccountName: sleep
           containers:
           - name: sleep
             image: curlimages/curl
             command: ["/bin/sleep", "3650d"]
             imagePullPolicy: IfNotPresent
             volumeMounts:
             - name: tmp
               mountPath: /tmp
             securityContext:
               runAsUser: 1000
           volumes:
           - name: tmp
             emptyDir: {}
           # CSI volume
           - name: workload-socket
             csi:
               driver: "csi.spiffe.io"
               readOnly: true
  6. Get pod information:

     $ SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath="{.items[0].metadata.name}")
     $ SLEEP_POD_UID=$(kubectl get pods "$SLEEP_POD" -o jsonpath='{.metadata.uid}')
  7. Register the workload:

     $ kubectl exec -n spire "$SPIRE_SERVER_POD" -- \
         /opt/spire/bin/spire-server entry create \
         -spiffeID spiffe://example.org/ns/default/sa/sleep \
         -parentID spiffe://example.org/ns/spire/sa/spire-agent \
         -selector k8s:ns:default \
         -selector k8s:pod-uid:"$SLEEP_POD_UID" \
         -dns "$SLEEP_POD" \
         -socketPath /run/spire/sockets/server.sock

SPIFFE IDs for workloads must follow the Istio SPIFFE ID pattern: spiffe://<trust.domain>/ns/<namespace>/sa/<service-account>

See the SPIRE help on Registering workloads to learn how to create new entries for workloads and get them attested using multiple selectors to strengthen attestation criteria.

Verifying that identities were created for workloads

Use the following command to confirm that identities were created for the workloads:

  $ kubectl exec -t "$SPIRE_SERVER_POD" -n spire -c spire-server -- ./bin/spire-server entry show
  Found 2 entries
  Entry ID         : c8dfccdc-9762-4762-80d3-5434e5388ae7
  SPIFFE ID        : spiffe://example.org/ns/istio-system/sa/istio-ingressgateway-service-account
  Parent ID        : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
  Revision         : 0
  X509-SVID TTL    : default
  JWT-SVID TTL     : default
  Selector         : k8s:pod-uid:88b71387-4641-4d9c-9a89-989c88f7509d

  Entry ID         : af7b53dc-4cc9-40d3-aaeb-08abbddd8e54
  SPIFFE ID        : spiffe://example.org/ns/default/sa/sleep
  Parent ID        : spiffe://example.org/spire/agent/k8s_psat/demo-cluster/bea19580-ae04-4679-a22e-472e18ca4687
  Revision         : 0
  X509-SVID TTL    : default
  JWT-SVID TTL     : default
  Selector         : k8s:pod-uid:ee490447-e502-46bd-8532-5a746b0871d6

Check the Ingress-gateway pod state:

  $ kubectl get pods -n istio-system
  NAME                                    READY   STATUS    RESTARTS   AGE
  istio-ingressgateway-5b45864fd4-lgrxs   1/1     Running   0          60s
  istiod-989f54d9c-sg7sn                  1/1     Running   0          45s

After registering an entry for the Ingress-gateway pod, Envoy receives the identity issued by SPIRE and uses it for all TLS and mTLS communications.
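The same istioctl proxy-config secret technique shown in the next section can be applied to the gateway itself. A sketch, reusing the $INGRESS_POD variable from the manual registration steps:

  $ istioctl proxy-config secret "$INGRESS_POD" -n istio-system -o json | jq -r \
      '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode | openssl x509 -noout -subject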

Check that the workload identity was issued by SPIRE

  1. Retrieve sleep’s SVID identity document using the istioctl proxy-config secret command:

     $ istioctl proxy-config secret "$SLEEP_POD" -o json | jq -r \
         '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' | base64 --decode > chain.pem
  2. Inspect the certificate and verify that SPIRE was the issuer:

     $ openssl x509 -in chain.pem -text | grep SPIRE
         Subject: C = US, O = SPIRE, CN = sleep-5f4d47c948-njvpk
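You can also confirm the SPIFFE ID embedded in the certificate's subject alternative name. The -ext flag requires OpenSSL 1.1.1 or later, and the exact ID shown depends on your trust domain and workload:

     $ openssl x509 -in chain.pem -noout -ext subjectAltName
     X509v3 Subject Alternative Name:
         URI:spiffe://example.org/ns/default/sa/sleep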

SPIFFE Federation

SPIRE Servers are able to authenticate SPIFFE identities originating from different trust domains. This is known as SPIFFE federation.

SPIRE Agents can be configured to push federated bundles to Envoy through the Envoy SDS API, allowing Envoy to use validation contexts to verify peer certificates and trust workloads from other trust domains. To enable Istio to federate SPIFFE identities through the SPIRE integration, consult the SPIRE Agent SDS configuration reference and set the following SDS configuration values in your SPIRE Agent configuration file.

Configuration             | Description                                                                                       | Resource Name
--------------------------|---------------------------------------------------------------------------------------------------|--------------
default_svid_name         | The TLS Certificate resource name to use for the default X509-SVID with Envoy SDS                  | default
default_bundle_name       | The Validation Context resource name to use for the default X.509 bundle with Envoy SDS            | null
default_all_bundles_name  | The Validation Context resource name to use for all bundles (including federated) with Envoy SDS   | ROOTCA

This will allow Envoy to get federated bundles directly from SPIRE.
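Put together, a sketch of the corresponding fragment of the SPIRE Agent configuration file, using the values from the table above (all other agent configuration elided):

  agent {
      sds {
          default_svid_name = "default"
          default_bundle_name = "null"
          default_all_bundles_name = "ROOTCA"
      }
  }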

Create federated registration entries

  • If using the SPIRE Controller Manager, create federated entries for workloads by setting the federatesWith field of the ClusterSPIFFEID CR to the trust domains you want the pod to federate with:

     apiVersion: spire.spiffe.io/v1alpha1
     kind: ClusterSPIFFEID
     metadata:
       name: federation
     spec:
       spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
       podSelector:
         matchLabels:
           spiffe.io/spire-managed-identity: "true"
       federatesWith: ["example.io", "example.ai"]
  • For manual registration see Create Registration Entries for Federation.

Cleanup SPIRE

If you installed SPIRE using the quick start SPIRE deployment provided by Istio, use the following commands to remove those Kubernetes resources:

  $ kubectl delete CustomResourceDefinition clusterspiffeids.spire.spiffe.io
  $ kubectl delete CustomResourceDefinition clusterfederatedtrustdomains.spire.spiffe.io
  $ kubectl delete -n spire configmap spire-bundle
  $ kubectl delete -n spire serviceaccount spire-agent
  $ kubectl delete -n spire configmap spire-agent
  $ kubectl delete -n spire daemonset spire-agent
  $ kubectl delete csidriver csi.spiffe.io
  $ kubectl delete ValidatingWebhookConfiguration spire-controller-manager-webhook
  $ kubectl delete -n spire configmap spire-controller-manager-config
  $ kubectl delete -n spire configmap spire-server
  $ kubectl delete -n spire service spire-controller-manager-webhook-service
  $ kubectl delete -n spire service spire-server-bundle-endpoint
  $ kubectl delete -n spire service spire-server
  $ kubectl delete -n spire serviceaccount spire-server
  $ kubectl delete -n spire deployment spire-server
  $ kubectl delete clusterrole spire-server-cluster-role spire-agent-cluster-role manager-role
  $ kubectl delete clusterrolebinding spire-server-cluster-role-binding spire-agent-cluster-role-binding manager-role-binding
  $ kubectl delete -n spire role spire-server-role leader-election-role
  $ kubectl delete -n spire rolebinding spire-server-role-binding leader-election-role-binding
  $ kubectl delete namespace spire
  $ rm istio.yaml chain.pem
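Alternatively, if you still have the quick start manifest from Install SPIRE, deleting it should remove the same Kubernetes resources in one step; the local files still need to be removed by hand:

  $ kubectl delete -f samples/security/spire/spire-quickstart.yaml
  $ rm istio.yaml chain.pem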