Installation using Helm
This guide will show you how to install Cilium using Helm. This involves a couple of additional steps compared to the Quick Installation and requires you to manually select the best datapath and IPAM mode for your particular environment.
Install Cilium
Note
Make sure you have Helm 3 installed. Helm 2 is no longer supported.
Set up the Helm repository:
helm repo add cilium https://helm.cilium.io/
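If you have added the Cilium Helm repository before, you may want to refresh it so the latest charts are available (an optional step, not part of the upstream instructions):
helm repo update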
Generic
GCP/GKE
Azure/AKS
AWS/EKS
OpenShift
RKE
k3s
These are the generic instructions for installing Cilium into any Kubernetes cluster using the default configuration options below. See the other tabs for distribution- and platform-specific instructions, which also list the recommended default configuration for each platform.
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Encapsulation | Cluster Pool | Kubernetes CRD |
Requirements:
- Kubernetes must be configured to use CNI (see Network Plugin Requirements)
- Linux kernel >= 4.9.17
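To quickly check the kernel requirement, you can print the running kernel version on each node:
# Must print a version >= 4.9.17
uname -r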
Tip
See System Requirements for more details on the system requirements.
Install Cilium:
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system
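To confirm that the release rolled out, you can watch the Cilium pods come up. A quick sketch, using the label selectors set by the Cilium Helm chart:
# Cilium agent pods (one per node)
kubectl -n kube-system get pods -l k8s-app=cilium
# Cilium operator pods
kubectl -n kube-system get pods -l name=cilium-operator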
To install Cilium on Google Kubernetes Engine (GKE), perform the following steps:
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Direct Routing | Kubernetes PodCIDR | Kubernetes CRD |
Requirements:
- The cluster must be created with the taint node.cilium.io/agent-not-ready=true:NoSchedule using the --node-taints option (an example follows).
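For illustration, a hypothetical gcloud invocation that creates a cluster with this taint; NAME and ZONE are the same placeholders used in the commands below:
gcloud container clusters create "${NAME}" --zone "${ZONE}" \
    --node-taints node.cilium.io/agent-not-ready=true:NoSchedule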
Install Cilium:
Extract the Cluster CIDR to enable native-routing:
NATIVE_CIDR="$(gcloud container clusters describe "${NAME}" --zone "${ZONE}" --format 'value(clusterIpv4Cidr)')"
echo $NATIVE_CIDR
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--set nodeinit.enabled=true \
--set nodeinit.reconfigureKubelet=true \
--set nodeinit.removeCbrBridge=true \
--set cni.binPath=/home/kubernetes/bin \
--set gke.enabled=true \
--set ipam.mode=kubernetes \
--set nativeRoutingCIDR=$NATIVE_CIDR
The NodeInit DaemonSet is required to prepare GKE nodes as they are added to the cluster. It performs the following actions:
- Reconfigure kubelet to run in CNI mode
- Mount the eBPF filesystem
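To spot-check the NodeInit work, you can verify on a node itself that the eBPF filesystem is mounted (a quick sketch):
# Should print a bpf mount on /sys/fs/bpf
mount | grep /sys/fs/bpf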
To install Cilium on Azure Kubernetes Service (AKS), perform the following steps:
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Direct Routing | Azure IPAM | Kubernetes CRD |
Tip
If you want to chain Cilium on top of the Azure CNI, refer to the guide Azure CNI.
Requirements:
- The AKS cluster must be created with
--network-plugin azure
for compatibility with Cilium. The Azure network plugin will be replaced with Cilium by the installer. - Node pools must also be created with the taint
node.cilium.io/agent-not-ready=true:NoSchedule
using--node-taints
option.
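For illustration, a hypothetical node pool creation that applies this taint; RESOURCE_GROUP_NAME and CLUSTER_NAME are the same placeholders used in the commands below, and the pool name is arbitrary:
az aks nodepool add --resource-group "${RESOURCE_GROUP_NAME}" --cluster-name "${CLUSTER_NAME}" \
    --name nodepool2 --node-taints node.cilium.io/agent-not-ready=true:NoSchedule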
Limitations:
- All VMs and VM scale sets used in a cluster must belong to the same resource group.
Create a service principal:
To allow cilium-operator to interact with the Azure API, a service principal is required. You can reuse an existing service principal, but it is recommended to create a dedicated one for each Cilium installation:
az ad sp create-for-rbac --name cilium-operator-$RANDOM > azure-sp.json
The contents of azure-sp.json should look like this:
{
"appId": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
"displayName": "cilium-operator",
"name": "http://cilium-operator",
"password": "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
"tenant": "cccccccc-cccc-cccc-cccc-cccccccccccc"
}
Extract the relevant credentials to access the Azure API:
AZURE_SUBSCRIPTION_ID="$(az account show | jq -r .id)"
AZURE_CLIENT_ID="$(jq -r .appId < azure-sp.json)"
AZURE_CLIENT_SECRET="$(jq -r .password < azure-sp.json)"
AZURE_TENANT_ID="$(jq -r .tenant < azure-sp.json)"
AZURE_NODE_RESOURCE_GROUP="$(az aks show --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME | jq -r .nodeResourceGroup)"
Note
AZURE_NODE_RESOURCE_GROUP must be set to the resource group of the node pool, not the resource group of the AKS cluster.
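As a sanity check, you can confirm that the extracted value names an existing resource group; on AKS the node resource group typically follows the MC_<resource-group>_<cluster>_<location> naming scheme:
# Prints the resource group name if it exists
az group show --name "${AZURE_NODE_RESOURCE_GROUP}" --query name --output tsv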
Install Cilium:
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--set azure.enabled=true \
--set azure.resourceGroup=$AZURE_NODE_RESOURCE_GROUP \
--set azure.subscriptionID=$AZURE_SUBSCRIPTION_ID \
--set azure.tenantID=$AZURE_TENANT_ID \
--set azure.clientID=$AZURE_CLIENT_ID \
--set azure.clientSecret=$AZURE_CLIENT_SECRET \
--set tunnel=disabled \
--set ipam.mode=azure \
--set enableIPv4Masquerade=false \
--set nodeinit.enabled=true
To install Cilium on Amazon Elastic Kubernetes Service (EKS), perform the following steps:
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Direct Routing (ENI) | AWS ENI | Kubernetes CRD |
For more information on AWS ENI mode, see AWS ENI.
Tip
If you want to chain Cilium on top of the AWS CNI, refer to the guide AWS VPC CNI plugin.
The following command creates a Kubernetes cluster with eksctl using Amazon Elastic Kubernetes Service. See eksctl Installation for instructions on how to install eksctl and prepare your account.
export NAME="$(whoami)-$RANDOM"
cat <<EOF >eks-config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${NAME}
  region: eu-west-1
managedNodeGroups:
- name: ng-1
  desiredCapacity: 2
  privateNetworking: true
  # taint nodes so that application pods are
  # not scheduled until Cilium is deployed.
  taints:
  - key: "node.cilium.io/agent-not-ready"
    value: "true"
    effect: "NoSchedule"
EOF
eksctl create cluster -f ./eks-config.yaml
Limitations:
- The AWS ENI integration of Cilium is currently only enabled for IPv4. If you want to use IPv6, use a datapath/IPAM mode other than ENI.
Delete the VPC CNI (aws-node DaemonSet)
Cilium will manage ENIs instead of the VPC CNI, so the aws-node DaemonSet has to be deleted to prevent conflicting behavior.
kubectl -n kube-system delete daemonset aws-node
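You can verify that the DaemonSet is gone before installing Cilium (k8s-app=aws-node is the label used by the VPC CNI pods; a quick sketch):
# Should return no pods
kubectl -n kube-system get pods -l k8s-app=aws-node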
Install Cilium:
Deploy Cilium release via Helm:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace kube-system \
--set eni.enabled=true \
--set ipam.mode=eni \
--set egressMasqueradeInterfaces=eth0 \
--set tunnel=disabled \
--set nodeinit.enabled=true
Note
This Helm command sets eni.enabled=true and tunnel=disabled, meaning that Cilium will allocate a fully-routable AWS ENI IP address for each pod, similar to the behavior of the Amazon VPC CNI plugin. This mode depends on a set of Required Privileges from the EC2 API.
Cilium can alternatively run in EKS using an overlay mode that gives pods non-VPC-routable IPs. This allows running more pods per Kubernetes worker node than the ENI limit, but means that pod connectivity to resources outside the cluster (e.g., VMs in the VPC or AWS managed services) is masqueraded (i.e., SNAT) by Cilium to use the VPC IP address of the Kubernetes worker node. Excluding the lines for eni.enabled=true, ipam.mode=eni, and tunnel=disabled from the Helm command will configure Cilium to use overlay routing mode (the Helm default).
Cilium is now deployed and you are ready to scale up the cluster.
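A sketch using eksctl, assuming the node group ng-1 from the configuration above; the target node count of 3 is an arbitrary example, and the node group's min/max sizes may need adjusting:
eksctl scale nodegroup --cluster "${NAME}" --name ng-1 --nodes 3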
To install Cilium on OpenShift, perform the following steps:
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Encapsulation | Cluster Pool | Kubernetes CRD |
Requirements:
- OpenShift 4.x
Install Cilium:
Cilium is a Certified OpenShift CNI Plugin and is best installed when an OpenShift cluster is created using the OpenShift installer. Please refer to Installation on OpenShift OKD for more information.
To install Cilium on Rancher Kubernetes Engine (RKE), perform the following steps:
Note
If you are using RKE2, Cilium is directly integrated; see Using Cilium in the RKE2 documentation. Either installation method works.
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Encapsulation | Cluster Pool | Kubernetes CRD |
Requirements:
Follow the RKE Installation Guide with the below change:
From:
network:
  options:
    flannel_backend_type: "vxlan"
  plugin: "canal"
To:
network:
  plugin: none
Install Cilium:
Install Cilium via helm install:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace $CILIUM_NAMESPACE
To install Cilium on k3s, perform the following steps:
Default Configuration:
| Datapath | IPAM | Datastore |
|---|---|---|
| Encapsulation | Cluster Pool | Kubernetes CRD |
Requirements:
- Install your k3s cluster as you normally would, but pass in --flannel-backend=none so you can install Cilium on top:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='--flannel-backend=none' sh -
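To grow the cluster, additional agent nodes can join the server. A sketch, where <server-ip> is a placeholder and the token can be read from /var/lib/rancher/k3s/server/node-token on the server:
# Run on each additional node
curl -sfL https://get.k3s.io | K3S_URL='https://<server-ip>:6443' K3S_TOKEN='<node-token>' sh -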
Mount the eBPF Filesystem:
On each node, run the following to mount the eBPF Filesystem:
sudo mount bpffs -t bpf /sys/fs/bpf
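To make the mount survive reboots, you could add an fstab entry (a sketch, assuming you prefer fstab over a systemd mount unit):
# Appends a bpffs entry to /etc/fstab
echo 'bpffs /sys/fs/bpf bpf defaults 0 0' | sudo tee -a /etc/fstab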
Install Cilium:
helm install cilium cilium/cilium --version 1.10.2 \
--namespace $CILIUM_NAMESPACE
Restart unmanaged Pods
If you did not create the cluster with the nodes tainted with node.cilium.io/agent-not-ready, unmanaged pods must be restarted manually. Restart all pods that are already running and are not in host-networking mode so that Cilium starts managing them. This ensures that all pods which were running before Cilium was deployed get network connectivity provided by Cilium and that NetworkPolicy applies to them:
$ kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true | grep '<none>' | awk '{print "-n "$1" "$2}' | xargs -L 1 -r kubectl delete pod
pod "event-exporter-v0.2.3-f9c896d75-cbvcz" deleted
pod "fluentd-gcp-scaler-69d79984cb-nfwwk" deleted
pod "heapster-v1.6.0-beta.1-56d5d5d87f-qw8pv" deleted
pod "kube-dns-5f8689dbc9-2nzft" deleted
pod "kube-dns-5f8689dbc9-j7x5f" deleted
pod "kube-dns-autoscaler-76fcd5f658-22r72" deleted
pod "kube-state-metrics-7d9774bbd5-n6m5k" deleted
pod "l7-default-backend-6f8697844f-d2rq2" deleted
pod "metrics-server-v0.3.1-54699c9cc8-7l5w2" deleted
Note
This may error out on macOS due to -r being unsupported by xargs. In that case you can safely run the command without -r, with the symptom that it will hang if there are no pods to restart. You can stop it with Ctrl-C.
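A portable alternative that avoids xargs -r entirely is to iterate with a while loop (a sketch, equivalent to the command above):
kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,HOSTNETWORK:.spec.hostNetwork --no-headers=true \
    | grep '<none>' | while read -r ns name _; do kubectl -n "$ns" delete pod "$name"; done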
Validate the Installation
Cilium CLI
Manually
Install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).
Linux
macOS
Other
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz{,.sha256sum}
curl -L --remote-name-all https://github.com/cilium/cilium-cli/releases/latest/download/cilium-darwin-amd64.tar.gz{,.sha256sum}
shasum -a 256 -c cilium-darwin-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-darwin-amd64.tar.gz /usr/local/bin
rm cilium-darwin-amd64.tar.gz{,.sha256sum}
See the full page of releases.
To validate that Cilium has been properly installed, you can run:
$ cilium status --wait
/¯¯\
/¯¯\__/¯¯\ Cilium: OK
\__/¯¯\__/ Operator: OK
/¯¯\__/¯¯\ Hubble: disabled
\__/¯¯\__/ ClusterMesh: disabled
\__/
DaemonSet cilium Desired: 2, Ready: 2/2, Available: 2/2
Deployment cilium-operator Desired: 2, Ready: 2/2, Available: 2/2
Containers: cilium-operator Running: 2
cilium Running: 2
Image versions cilium quay.io/cilium/cilium:v1.10.2: 2
cilium-operator quay.io/cilium/operator-generic:v1.10.2: 2
Run the following command to validate that your cluster has proper network connectivity:
$ cilium connectivity test
ℹ️ Monitor aggregation detected, will skip some flow validation steps
✨ [k8s-cluster] Creating namespace for connectivity check...
(...)
---------------------------------------------------------------------------------------------------------------------
📋 Test Report
---------------------------------------------------------------------------------------------------------------------
✅ 69/69 tests successful (0 warnings)
Congratulations! You have a fully functional Kubernetes cluster with Cilium. 🎉
You can monitor the progress as Cilium and all required components are being installed:
$ kubectl -n kube-system get pods --watch
NAME READY STATUS RESTARTS AGE
cilium-operator-cb4578bc5-q52qk 0/1 Pending 0 8s
cilium-s8w5m 0/1 PodInitializing 0 7s
coredns-86c58d9df4-4g7dd 0/1 ContainerCreating 0 8m57s
coredns-86c58d9df4-4l6b2 0/1 ContainerCreating 0 8m57s
It may take a couple of minutes for all components to come up:
cilium-operator-cb4578bc5-q52qk 1/1 Running 0 4m13s
cilium-s8w5m 1/1 Running 0 4m12s
coredns-86c58d9df4-4g7dd 1/1 Running 0 13m
coredns-86c58d9df4-4l6b2 1/1 Running 0 13m
You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.
kubectl create ns cilium-test
Deploy the check with:
kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.10/examples/kubernetes/connectivity-check/connectivity-check.yaml
It will deploy a series of deployments which use various connectivity paths to connect to each other. The connectivity paths include ones with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:
$ kubectl get pods -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-q8d99 1/1 Running 0 66s
echo-b-795c4b4f76-9wrrx 1/1 Running 0 66s
echo-b-host-6b7fc94b7c-xtsff 1/1 Running 0 66s
host-to-b-multi-node-clusterip-85476cd779-bpg4b 1/1 Running 0 66s
host-to-b-multi-node-headless-dc6c44cb5-8jdz8 1/1 Running 0 65s
pod-to-a-79546bc469-rl2qq 1/1 Running 0 66s
pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p 1/1 Running 0 66s
pod-to-a-denied-cnp-6967cb6f7f-7h9fn 1/1 Running 0 66s
pod-to-b-intra-node-nodeport-9b487cf89-6ptrt 1/1 Running 0 65s
pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw 1/1 Running 0 66s
pod-to-b-multi-node-headless-7d44b85d69-mtscc 1/1 Running 0 66s
pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82 1/1 Running 0 65s
pod-to-external-1111-d56f47579-d79dz 1/1 Running 0 66s
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7 1/1 Running 0 66s
Note
If you deploy the connectivity check to a single-node cluster, pods that check multi-node functionality will remain in the Pending state. This is expected, since these pods need at least two nodes to be scheduled successfully.
Once done with the test, remove the cilium-test namespace:
kubectl delete ns cilium-test