Install Istio in Dual-Stack mode

This feature is actively in development and is considered experimental.

Prerequisites

  • A Kubernetes cluster provisioned with dual-stack (IPv4 and IPv6) networking.

Installation steps

If you want to use kind for testing, you can set up a dual-stack cluster with the following command:

  $ kind create cluster --name istio-ds --config - <<EOF
  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  networking:
    ipFamily: dual
  EOF
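
To confirm that the cluster really is dual-stack, you can check the pod CIDRs assigned to its nodes; with dual-stack networking each node should carry both an IPv4 and an IPv6 range. This check and its example output are only a sketch, and the exact CIDR values will differ in your cluster:

  $ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'
  ["10.244.0.0/24","fd00:10:244::/64"]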

To enable dual-stack support in Istio, you will need to modify your IstioOperator or Helm values with the following configuration; the same settings can also be passed directly on the istioctl install command line.

Using an IstioOperator resource:

  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  spec:
    meshConfig:
      defaultConfig:
        proxyMetadata:
          ISTIO_DUAL_STACK: "true"
    values:
      pilot:
        env:
          ISTIO_DUAL_STACK: "true"
      # The below values are optional and can be used based on your requirements
      gateways:
        istio-ingressgateway:
          ipFamilyPolicy: RequireDualStack
        istio-egressgateway:
          ipFamilyPolicy: RequireDualStack
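
You can save the IstioOperator above to a file (for example dual-stack.yaml; the file name here is only illustrative) and apply it with istioctl:

  $ istioctl install -f dual-stack.yaml -y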

The equivalent configuration expressed as Helm values:

  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_DUAL_STACK: "true"
  values:
    pilot:
      env:
        ISTIO_DUAL_STACK: "true"
    # The below values are optional and can be used based on your requirements
    gateways:
      istio-ingressgateway:
        ipFamilyPolicy: RequireDualStack
      istio-egressgateway:
        ipFamilyPolicy: RequireDualStack

Or pass the settings directly on the istioctl install command line:

  $ istioctl install --set values.pilot.env.ISTIO_DUAL_STACK=true --set meshConfig.defaultConfig.proxyMetadata.ISTIO_DUAL_STACK="true" --set values.gateways.istio-ingressgateway.ipFamilyPolicy=RequireDualStack --set values.gateways.istio-egressgateway.ipFamilyPolicy=RequireDualStack -y
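
After installation, you can confirm that the dual-stack flag reached the control plane by inspecting the istiod deployment's environment. This sketch assumes the default istio-system namespace and istiod deployment name, and the output shown is trimmed to the relevant lines:

  $ kubectl -n istio-system get deployment istiod -o yaml | grep -A1 'name: ISTIO_DUAL_STACK'
  - name: ISTIO_DUAL_STACK
    value: "true"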

Verification

  1. Create three namespaces:

    • dual-stack: tcp-echo will listen on both an IPv4 and IPv6 address.
    • ipv4: tcp-echo will listen on only an IPv4 address.
    • ipv6: tcp-echo will listen on only an IPv6 address.

     $ kubectl create namespace dual-stack
     $ kubectl create namespace ipv4
     $ kubectl create namespace ipv6
  2. Enable sidecar injection on all of those namespaces as well as the default namespace:

     $ kubectl label --overwrite namespace default istio-injection=enabled
     $ kubectl label --overwrite namespace dual-stack istio-injection=enabled
     $ kubectl label --overwrite namespace ipv4 istio-injection=enabled
     $ kubectl label --overwrite namespace ipv6 istio-injection=enabled
  3. Create tcp-echo deployments in the namespaces (a quick way to double-check the resulting Services is sketched after this list):

     $ kubectl apply --namespace dual-stack -f @samples/tcp-echo/tcp-echo-dual-stack.yaml@
     $ kubectl apply --namespace ipv4 -f @samples/tcp-echo/tcp-echo-ipv4.yaml@
     $ kubectl apply --namespace ipv6 -f @samples/tcp-echo/tcp-echo-ipv6.yaml@
  4. Deploy the sleep sample app to use as a test source for sending requests.

     $ kubectl apply -f @samples/sleep/sleep.yaml@
  5. Verify the traffic reaches the dual-stack pods:

     $ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo dualstack | nc tcp-echo.dual-stack 9000"
     hello dualstack
  6. Verify the traffic reaches the IPv4 pods:

     $ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv4 | nc tcp-echo.ipv4 9000"
     hello ipv4
  7. Verify the traffic reaches the IPv6 pods:

     $ kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -- sh -c "echo ipv6 | nc tcp-echo.ipv6 9000"
     hello ipv6
  8. Verify the envoy listeners:

     $ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack --port 9000

     You will see that listeners are now bound to multiple addresses, but only for dual-stack services. Other services listen on a single IP address only.

    1. "name": "fd00:10:96::f9fc_9000",
    2. "address": {
    3. "socketAddress": {
    4. "address": "fd00:10:96::f9fc",
    5. "portValue": 9000
    6. }
    7. },
    8. "additionalAddresses": [
    9. {
    10. "address": {
    11. "socketAddress": {
    12. "address": "10.96.106.11",
    13. "portValue": 9000
    14. }
    15. }
    16. }
    17. ],
  9. Verify that the virtualInbound listener is configured to listen on both 0.0.0.0 and [::]:

    1. "name": "virtualInbound",
    2. "address": {
    3. "socketAddress": {
    4. "address": "0.0.0.0",
    5. "portValue": 15006
    6. }
    7. },
    8. "additionalAddresses": [
    9. {
    10. "address": {
    11. "socketAddress": {
    12. "address": "::",
    13. "portValue": 15006
    14. }
    15. }
    16. }
    17. ],
  10. Verify that the Envoy endpoints are configured to route to both IPv4 and IPv6 addresses:

     $ istioctl proxy-config endpoints "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" --port 9000
     ENDPOINT                  STATUS      OUTLIER CHECK     CLUSTER
     10.244.0.19:9000          HEALTHY     OK                outbound|9000||tcp-echo.ipv4.svc.cluster.local
     10.244.0.26:9000          HEALTHY     OK                outbound|9000||tcp-echo.dual-stack.svc.cluster.local
     fd00:10:244::1a:9000      HEALTHY     OK                outbound|9000||tcp-echo.dual-stack.svc.cluster.local
     fd00:10:244::18:9000      HEALTHY     OK                outbound|9000||tcp-echo.ipv6.svc.cluster.local
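
As a cross-check for step 3, you can confirm that only the Service in the dual-stack namespace was assigned two cluster IPs. The command below assumes the Service created by the sample manifests is named tcp-echo, as the nc commands above imply:

  $ kubectl get service tcp-echo -n dual-stack -o jsonpath='{.spec.ipFamilies}{"\n"}{.spec.clusterIPs}{"\n"}'

The dual-stack Service should report two IP families and two cluster IPs, while the Services in the ipv4 and ipv6 namespaces report only one of each.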

Now you can experiment with dual-stack services in your environment!
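
If you want to dig further, the JSON output of istioctl proxy-config pairs well with jq. The sketch below assumes jq is installed and relies on the listener output shape shown above; it prints every address the tcp-echo sidecar binds on port 9000, which in the example above would be both the IPv6 and the IPv4 cluster IP:

  $ istioctl proxy-config listeners "$(kubectl get pod -n dual-stack -l app=tcp-echo -o jsonpath='{.items[0].metadata.name}')" -n dual-stack --port 9000 -o json \
      | jq -r '.[] | [.address.socketAddress.address] + [.additionalAddresses[]?.address.socketAddress.address] | join(" ")'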

Cleanup

  1. Clean up the application namespaces and deployments:

     $ kubectl delete -f @samples/sleep/sleep.yaml@
     $ kubectl delete ns dual-stack ipv4 ipv6