Install Typha

Typha sits between the Kubernetes API server and per-node daemons like Felix and confd (running in calico/node). It watches the Kubernetes resources and Calico custom resources used by these daemons, and whenever a resource changes it fans out the update to the daemons. This reduces the number of watches the Kubernetes API server needs to serve and improves scalability of the cluster.

Provision Certificates

We will use mutually authenticated TLS to ensure that calico/node and Typha communicate securely. In this section, we generate a certificate authority (CA) and use it to sign a certificate for Typha.

Create the CA certificate and key

openssl req -x509 -newkey rsa:4096 \
  -keyout typhaca.key \
  -nodes \
  -out typhaca.crt \
  -subj "/CN=Calico Typha CA" \
  -days 365
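
As an optional sanity check, you can confirm the CA certificate's subject and validity period before using it:

openssl x509 -in typhaca.crt -noout -subject -dates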

Store the CA certificate in a ConfigMap that Typha and calico/node will access.

kubectl create configmap -n kube-system calico-typha-ca --from-file=typhaca.crt
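
You can optionally confirm the ConfigMap contains the certificate:

kubectl describe configmap -n kube-system calico-typha-ca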

Create the Typha key and certificate signing request (CSR)

openssl req -newkey rsa:4096 \
  -keyout typha.key \
  -nodes \
  -out typha.csr \
  -subj "/CN=calico-typha"

This requests a certificate with the Common Name (CN) calico-typha. calico/node will be configured to verify this name when it connects to Typha.
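
Before signing, you can optionally confirm the CSR carries that Common Name:

openssl req -in typha.csr -noout -subject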

Sign the Typha certificate with the CA

openssl x509 -req -in typha.csr \
  -CA typhaca.crt \
  -CAkey typhaca.key \
  -CAcreateserial \
  -out typha.crt \
  -days 365
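
As a quick check that the signed certificate chains back to our CA, run the following; openssl should report typha.crt: OK:

openssl verify -CAfile typhaca.crt typha.crt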

Store the Typha key and certificate in a secret that Typha will access.

kubectl create secret generic -n kube-system calico-typha-certs --from-file=typha.key --from-file=typha.crt
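
Optionally confirm both files made it into the secret; describe lists the key names and sizes without printing the values:

kubectl describe secret -n kube-system calico-typha-certs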

Provision RBAC

Create a ServiceAccount that will be used to run Typha.

kubectl create serviceaccount -n kube-system calico-typha

Define a cluster role for Typha with permission to watch Calico datastore objects.

kubectl apply -f - <<EOF
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-typha
rules:
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
      - endpoints
      - services
      - nodes
    verbs:
      - watch
      - list
      - get
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - clusterinformations
      - hostendpoints
      - blockaffinities
      - networksets
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      #- ippools
      #- felixconfigurations
      - clusterinformations
    verbs:
      - get
      - create
      - update
EOF

Bind the cluster role to the calico-typha ServiceAccount.

kubectl create clusterrolebinding calico-typha --clusterrole=calico-typha --serviceaccount=kube-system:calico-typha
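
To spot-check the new permissions, kubectl auth can-i can impersonate the ServiceAccount (this assumes your own account has impersonation rights, as cluster admins typically do). Each of these should print yes:

kubectl auth can-i watch felixconfigurations.crd.projectcalico.org \
  --as=system:serviceaccount:kube-system:calico-typha
kubectl auth can-i update clusterinformations.crd.projectcalico.org \
  --as=system:serviceaccount:kube-system:calico-typha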

Install Deployment

Since Typha is required by calico/node, and calico/node establishes the pod network, we run Typha as a host networked pod to avoid a chicken-and-egg problem. We run 3 replicas of Typha so that even during a rolling update, a single failure does not make Typha unavailable.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  replicas: 3
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      k8s-app: calico-typha
  template:
    metadata:
      labels:
        k8s-app: calico-typha
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: 'true'
    spec:
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
      serviceAccountName: calico-typha
      priorityClassName: system-cluster-critical
      containers:
        - image: calico/typha:v3.8.0
          name: calico-typha
          ports:
            - containerPort: 5473
              name: calico-typha
              protocol: TCP
          env:
            # Disable logging to file and syslog since those don't make sense in Kubernetes.
            - name: TYPHA_LOGFILEPATH
              value: "none"
            - name: TYPHA_LOGSEVERITYSYS
              value: "none"
            # Monitor the Kubernetes API to find the number of running instances and rebalance
            # connections.
            - name: TYPHA_CONNECTIONREBALANCINGMODE
              value: "kubernetes"
            - name: TYPHA_DATASTORETYPE
              value: "kubernetes"
            - name: TYPHA_HEALTHENABLED
              value: "true"
            # Location of the CA bundle Typha uses to authenticate calico/node; volume mount.
            - name: TYPHA_CAFILE
              value: /calico-typha-ca/typhaca.crt
            # Common name on the calico/node certificate.
            - name: TYPHA_CLIENTCN
              value: calico-node
            # Location of the server certificate for Typha; volume mount.
            - name: TYPHA_SERVERCERTFILE
              value: /calico-typha-certs/typha.crt
            # Location of the server certificate key for Typha; volume mount.
            - name: TYPHA_SERVERKEYFILE
              value: /calico-typha-certs/typha.key
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9098
              host: localhost
            periodSeconds: 30
            initialDelaySeconds: 30
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9098
              host: localhost
            periodSeconds: 10
          volumeMounts:
            - name: calico-typha-ca
              mountPath: "/calico-typha-ca"
              readOnly: true
            - name: calico-typha-certs
              mountPath: "/calico-typha-certs"
              readOnly: true
      volumes:
        - name: calico-typha-ca
          configMap:
            name: calico-typha-ca
        - name: calico-typha-certs
          secret:
            secretName: calico-typha-certs
EOF

We set TYPHA_CLIENTCN to calico-node, which is the Common Name we will use on the certificate calico/node presents in the next lab.
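
Because the 3 replicas exist to keep Typha available, you can optionally also guard against voluntary evictions with a PodDisruptionBudget. This is an addition beyond the lab's own manifests; apiVersion policy/v1 assumes Kubernetes 1.21 or later (older clusters use policy/v1beta1):

kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: calico-typha
  namespace: kube-system
spec:
  # Allow at most one Typha pod to be evicted at a time.
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: calico-typha
EOF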

Verify Typha is up and running with three instances.

kubectl get pods -l k8s-app=calico-typha -n kube-system

Result:

NAME                            READY   STATUS    RESTARTS   AGE
calico-typha-66498ddfbd-2pzsr   1/1     Running   0          69s
calico-typha-66498ddfbd-lrtzw   1/1     Running   0          50s
calico-typha-66498ddfbd-scckd   1/1     Running   0          62s
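
If the pods are still starting, you can wait for the rollout to finish instead of polling:

kubectl rollout status deployment/calico-typha -n kube-system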

Install Service

calico/node uses a Kubernetes Service to get load-balanced access to Typha.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: calico-typha
  namespace: kube-system
  labels:
    k8s-app: calico-typha
spec:
  ports:
    - port: 5473
      protocol: TCP
      targetPort: calico-typha
      name: calico-typha
  selector:
    k8s-app: calico-typha
EOF
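
Before testing TLS, you can optionally confirm the Service has an endpoint for each of the three Typha pods; because Typha runs host-networked, the addresses shown are node IPs:

kubectl get endpoints -n kube-system calico-typha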

Validate that Typha is using TLS.

TYPHA_CLUSTERIP=$(kubectl get svc -n kube-system calico-typha -o jsonpath='{.spec.clusterIP}')
curl https://$TYPHA_CLUSTERIP:5473 -v --cacert typhaca.crt

Result:

* Rebuilt URL to: https://10.103.120.116:5473/
*   Trying 10.103.120.116...
* TCP_NODELAY set
* Connected to 10.103.120.116 (10.103.120.116) port 5473 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: typhaca.crt
    CApath: /etc/ssl/certs
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS alert, Server hello (2):
* error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate

This demonstrates that Typha is presenting its TLS certificate and rejecting our connection because we do not present a client certificate. In the next lab we will deploy calico/node with a certificate Typha will accept.
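
To look more closely at the certificate Typha presents, openssl s_client prints the subject and the verification result against our CA (optional; exact output varies by OpenSSL version). This reuses the TYPHA_CLUSTERIP variable from above:

openssl s_client -connect $TYPHA_CLUSTERIP:5473 -CAfile typhaca.crt </dev/null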

Next

Install calico/node