Installation using Kops

As of the kops 1.9 release, Cilium can be plugged into kops-deployed clusters as the CNI plugin. This guide provides steps to create a Kubernetes cluster on AWS using kops with Cilium as the CNI plugin. Note that kops automates several AWS deployment features by default, including Auto Scaling groups, volumes, and VPCs.

Kops offers several out-of-the-box configurations of Cilium including Kubernetes Without kube-proxy, AWS ENI, and dedicated etcd cluster for Cilium. This guide will just go through a basic setup.
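For example, the dedicated etcd option only changes the value passed to --networking in the create command shown later in this guide. A minimal sketch (all other flags stay as in Creating a Cluster below):

  $ kops create cluster --networking cilium-etcd ${NAME} --yes   # other flags as in Creating a Cluster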

Prerequisites

  • aws cli
  • kubectl
  • aws account with the following permissions:
      • AmazonEC2FullAccess
      • AmazonRoute53FullAccess
      • AmazonS3FullAccess
      • IAMFullAccess
      • AmazonVPCFullAccess
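Before continuing, it can help to confirm that the AWS CLI is configured and authenticated; for example:

  $ aws sts get-caller-identity   # shows the account and identity the CLI is using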

Installing kops

Linux

  curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
  chmod +x kops-linux-amd64
  sudo mv kops-linux-amd64 /usr/local/bin/kops

macOS

  brew update && brew install kops
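You can confirm that kops is installed and on your PATH by printing its version:

  $ kops version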

Setting up IAM Group and User

Assuming you have all the prerequisites, run the following commands to create the kops user and group:

  $ # Create IAM group named kops and grant access
  $ aws iam create-group --group-name kops
  $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
  $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
  $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
  $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
  $ aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
  $ aws iam create-user --user-name kops
  $ aws iam add-user-to-group --user-name kops --group-name kops
  $ aws iam create-access-key --user-name kops
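The last command prints an access key ID and secret access key for the new kops user. As a sketch of how to make both the AWS CLI and kops use these credentials (exporting them ensures kops picks them up regardless of your AWS CLI profile setup):

  $ aws configure   # enter the kops user's access key ID and secret access key
  $ export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
  $ export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)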

kops requires a dedicated S3 bucket in order to store the state and representation of the cluster. You will need to provide your own unique bucket name (for example, a reversed FQDN with a short description of the cluster appended). Also make sure to use the region where you will be deploying the cluster.

  $ aws s3api create-bucket --bucket prefix-example-com-state-store --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
  $ export KOPS_STATE_STORE=s3://prefix-example-com-state-store
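Optionally, you may want to enable versioning on the state store bucket so that earlier cluster state can be recovered; for example:

  $ aws s3api put-bucket-versioning --bucket prefix-example-com-state-store --versioning-configuration Status=Enabled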

The above steps are sufficient for getting a working cluster installed. Please consult the kops AWS documentation for more detailed setup instructions.

Cilium Prerequisites

  • Ensure the System Requirements are met, particularly the Linux kernel and key-value store versions.

The default AMI, which is what we will use in this guide, satisfies the minimum kernel version required by Cilium.
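If you choose a different AMI, you can double-check the kernel version on the nodes once the cluster from the next section is up; for example:

  $ kubectl get nodes -o wide   # the KERNEL-VERSION column shows each node's kernel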

Creating a Cluster

  • Note that you will need to specify --master-zones and --zones for creating the master and worker nodes. The number of master zones should be odd (1, 3, …) for HA. For simplicity, you can keep all zones in a single region.
  • To keep things simple when following this guide, we will use a gossip-based cluster. This means you do not have to create a hosted zone upfront. The cluster NAME variable must end with .k8s.local to use the gossip protocol. If creating multiple clusters using the same kops user, make the cluster name unique by adding a prefix such as com-company-emailid-.

  $ export NAME=com-company-emailid-cilium.k8s.local
  $ kops create cluster --state=${KOPS_STATE_STORE} --node-count 3 --topology private --master-zones us-west-2a,us-west-2b,us-west-2c --zones us-west-2a,us-west-2b,us-west-2c --networking cilium --cloud-labels "Team=Dev,Owner=Admin" ${NAME} --yes

You may be prompted to create an SSH public-private key pair.

  $ ssh-keygen

(For a breakdown of the flags used above, see Appendix: Details of kops flags used in cluster creation. To tear down the cluster, see Deleting a Cluster.)
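Provisioning the instances can take several minutes. Depending on your kops version, one way to wait until the masters and nodes have joined and are ready (adjust the timeout as needed) is:

  $ kops validate cluster --state=${KOPS_STATE_STORE} --name=${NAME} --wait 10m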

Validate the Installation

Cilium CLI

Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

Linux

  curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
  sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
  sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
  rm cilium-linux-amd64.tar.gz{,.sha256sum}

macOS

  curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
  shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
  sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
  rm cilium-darwin-amd64.tar.gz{,.sha256sum}

Other

See the full page of releases.

To validate that Cilium has been properly installed, you can run

  $ cilium status --wait
     /¯¯\
  /¯¯\__/¯¯\    Cilium:         OK
  \__/¯¯\__/    Operator:       OK
  /¯¯\__/¯¯\    Hubble:         disabled
  \__/¯¯\__/    ClusterMesh:    disabled
     \__/

  DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
  Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
  Containers:       cilium-operator    Running: 2
                    cilium             Running: 2
  Image versions    cilium             quay.io/cilium/cilium:v1.9.5: 2
                    cilium-operator    quay.io/cilium/operator-generic:v1.9.5: 2

Run the following command to validate that your cluster has proper network connectivity:

  $ cilium connectivity test
  ℹ️  Monitor aggregation detected, will skip some flow validation steps
  [k8s-cluster] Creating namespace for connectivity check...
  (...)
  ---------------------------------------------------------------------------------------------------------------------
  📋 Test Report
  ---------------------------------------------------------------------------------------------------------------------
  69/69 tests successful (0 warnings)

Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉

Manually

You can monitor as Cilium and all required components are being installed:

  $ kubectl -n kube-system get pods --watch
  NAME                              READY   STATUS              RESTARTS   AGE
  cilium-operator-cb4578bc5-q52qk   0/1     Pending             0          8s
  cilium-s8w5m                      0/1     PodInitializing     0          7s
  coredns-86c58d9df4-4g7dd          0/1     ContainerCreating   0          8m57s
  coredns-86c58d9df4-4l6b2          0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

  cilium-operator-cb4578bc5-q52qk   1/1     Running   0          4m13s
  cilium-s8w5m                      1/1     Running   0          4m12s
  coredns-86c58d9df4-4g7dd          1/1     Running   0          13m
  coredns-86c58d9df4-4l6b2          1/1     Running   0          13m

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

  kubectl create ns cilium-test

Deploy the check with:

  kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments which use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

  $ kubectl get pods -n cilium-test
  NAME                                                     READY   STATUS    RESTARTS   AGE
  echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
  echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
  echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
  host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
  host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
  pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
  pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
  pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
  pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
  pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
  pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
  pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
  pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
  pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.

Once done with the test, remove the cilium-test namespace:

  kubectl delete ns cilium-test

Deleting a Cluster

To tear down all the AWS resources created by the kops cluster creation, delete the cluster with kops. The --yes parameter performs the deletion immediately:

  $ kops delete cluster ${NAME} --yes
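If you first want to review which AWS resources would be removed, run the same command without --yes; kops then only lists the resources it would delete:

  $ kops delete cluster ${NAME}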

Further reading on using Cilium with Kops

For additional Cilium configuration options when deploying with kops, see the kops networking documentation.

Appendix: Details of kops flags used in cluster creation

The following section explains all the flags used in the create cluster command.

  • --state=${KOPS_STATE_STORE} : kops uses an S3 bucket to store the state and representation of your cluster.
  • --node-count 3 : Number of worker nodes in the Kubernetes cluster.
  • --topology private : The cluster will be created with a private topology, meaning all masters and nodes are launched in private subnets in the VPC.
  • --master-zones us-west-2a,us-west-2b,us-west-2c : Using three zones ensures HA of the master nodes, each placed in a different Availability Zone.
  • --zones us-west-2a,us-west-2b,us-west-2c : Zones where the worker nodes will be deployed.
  • --networking cilium : The CNI plugin to be used - cilium. You can also use cilium-etcd, which will use a dedicated etcd cluster as the key/value store instead of CRDs.
  • --cloud-labels "Team=Dev,Owner=Admin" : Labels for your cluster that will be applied to your instances.
  • ${NAME} : Name of the cluster. Make sure the name ends with .k8s.local for a gossip-based cluster.
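If you later need to adjust any of these settings, or tune the Cilium configuration that kops generated, a typical workflow is to edit the stored cluster spec and then apply the change (a sketch; the available spec fields depend on your kops version):

  $ kops edit cluster ${NAME}                  # edit the stored cluster spec
  $ kops update cluster ${NAME} --yes          # apply the configuration change
  $ kops rolling-update cluster ${NAME} --yes  # roll nodes if the change requires it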