Install CNI plugin

Kubernetes uses the Container Network Interface (CNI) to interact with networking providers like Calico. The Calico binary that presents this API to Kubernetes is called the CNI plugin and must be installed on every node in the Kubernetes cluster.

To understand how the Container Network Interface (CNI) works with Kubernetes, and how it enhances Kubernetes networking, read our Kubernetes CNI guide.

Provision Kubernetes user account for the plugin

The CNI plugin interacts with the Kubernetes API server while creating pods, both to obtain additional information and to update the datastore with information about the pod.

On the Kubernetes control plane node, create a key for the CNI plugin to authenticate with, along with a certificate signing request (CSR).

  openssl req -newkey rsa:4096 \
    -keyout cni.key \
    -nodes \
    -out cni.csr \
    -subj "/CN=calico-cni"

Sign the certificate signing request using the main Kubernetes CA.

  sudo openssl x509 -req -in cni.csr \
    -CA /etc/kubernetes/pki/ca.crt \
    -CAkey /etc/kubernetes/pki/ca.key \
    -CAcreateserial \
    -out cni.crt \
    -days 365
  sudo chown $(id -u):$(id -g) cni.crt
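
To confirm the identity the API server will see, inspect the signed certificate's subject. The sketch below reproduces the flow end to end with a throwaway CA in a temp directory (smaller keys for speed); on your control plane node you only need the final `openssl x509 -noout -subject` line, pointed at the cni.crt you just signed.

```shell
# Throwaway CA standing in for /etc/kubernetes/pki/ca.{crt,key}.
dir=$(mktemp -d)
cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=fake-kubernetes-ca" 2>/dev/null
# Same CSR shape as above, with a smaller key for speed.
openssl req -newkey rsa:2048 -nodes -keyout cni.key -out cni.csr \
  -subj "/CN=calico-cni" 2>/dev/null
openssl x509 -req -in cni.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out cni.crt -days 1 2>/dev/null
# The CN must be calico-cni: that is the username the RBAC rules bind to.
openssl x509 -in cni.crt -noout -subject
```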

Next, we create a kubeconfig file that the CNI plugin will use to access Kubernetes. Copy this cni.kubeconfig file to every node in the cluster.

  APISERVER=$(kubectl config view -o jsonpath='{.clusters[0].cluster.server}')
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.crt \
    --embed-certs=true \
    --server=$APISERVER \
    --kubeconfig=cni.kubeconfig
  kubectl config set-credentials calico-cni \
    --client-certificate=cni.crt \
    --client-key=cni.key \
    --embed-certs=true \
    --kubeconfig=cni.kubeconfig
  kubectl config set-context default \
    --cluster=kubernetes \
    --user=calico-cni \
    --kubeconfig=cni.kubeconfig
  kubectl config use-context default --kubeconfig=cni.kubeconfig
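
For reference, the resulting cni.kubeconfig has roughly the following shape. The certificate data is elided here, and the server address is an illustrative placeholder for whatever $APISERVER resolved to on your cluster.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: kubernetes
    cluster:
      certificate-authority-data: <base64 CA certificate>  # embedded by --embed-certs
      server: https://10.0.0.1:6443                        # example; your $APISERVER
users:
  - name: calico-cni
    user:
      client-certificate-data: <base64 cni.crt>
      client-key-data: <base64 cni.key>
contexts:
  - name: default
    context:
      cluster: kubernetes
      user: calico-cni
current-context: default
```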

Provision RBAC

Define a cluster role the CNI plugin will use to access Kubernetes.

  kubectl apply -f - <<EOF
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: calico-cni
  rules:
    # The CNI plugin needs to get pods, nodes, and namespaces.
    - apiGroups: [""]
      resources:
        - pods
        - nodes
        - namespaces
      verbs:
        - get
    # The CNI plugin patches pods/status.
    - apiGroups: [""]
      resources:
        - pods/status
      verbs:
        - patch
    # These permissions are required for Calico CNI to perform IPAM allocations.
    - apiGroups: ["crd.projectcalico.org"]
      resources:
        - blockaffinities
        - ipamblocks
        - ipamhandles
      verbs:
        - get
        - list
        - create
        - update
        - delete
    - apiGroups: ["crd.projectcalico.org"]
      resources:
        - ipamconfigs
        - clusterinformations
        - ippools
      verbs:
        - get
        - list
  EOF

Bind the cluster role to the calico-cni account.

  kubectl create clusterrolebinding calico-cni --clusterrole=calico-cni --user=calico-cni
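
If you prefer to keep RBAC declarative, the imperative command above is equivalent to applying this manifest (apply one or the other, not both):

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-cni
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-cni
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: calico-cni
```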

Install the plugin

Do these steps on each node in your cluster.

Run these commands as root.

  sudo su

Install the CNI plugin binaries

  curl -L -o /opt/cni/bin/calico https://github.com/projectcalico/cni-plugin/releases/download/v3.14.0/calico-amd64
  chmod 755 /opt/cni/bin/calico
  curl -L -o /opt/cni/bin/calico-ipam https://github.com/projectcalico/cni-plugin/releases/download/v3.14.0/calico-ipam-amd64
  chmod 755 /opt/cni/bin/calico-ipam

Create the config directory

  mkdir -p /etc/cni/net.d/

Copy the kubeconfig from the previous section

  cp cni.kubeconfig /etc/cni/net.d/calico-kubeconfig
  chmod 600 /etc/cni/net.d/calico-kubeconfig

Write the CNI configuration

  cat > /etc/cni/net.d/10-calico.conflist <<EOF
  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "log_level": "info",
        "datastore_type": "kubernetes",
        "mtu": 1500,
        "ipam": {
          "type": "calico-ipam"
        },
        "policy": {
          "type": "k8s"
        },
        "kubernetes": {
          "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
        }
      },
      {
        "type": "portmap",
        "snat": true,
        "capabilities": {"portMappings": true}
      }
    ]
  }
  EOF
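
A conflist that is not valid JSON will keep the node NotReady while the container runtime logs parse errors, so a quick syntax check catches typos early. The sketch below validates a trimmed example written to a temp file; on a real node, point the `json.tool` line at /etc/cni/net.d/10-calico.conflist instead (python3 is assumed to be available; jq works equally well).

```shell
# Write a minimal conflist to a temp file and check that it parses.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "calico", "ipam": { "type": "calico-ipam" } }
  ]
}
EOF
# json.tool exits non-zero (and prints an error) on malformed JSON.
python3 -m json.tool < "$tmp" > /dev/null && echo "valid JSON"
```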

Exit from su and return to the logged-in user.

  exit

At this point, the Kubernetes nodes will become Ready, because Kubernetes now has a networking provider and configuration installed. Verify with:

  kubectl get nodes

Next

Install Typha