Networking and security observability with Hubble

This guide provides a walkthrough of setting up a local Kubernetes cluster with Hubble and Cilium installed, in order to demonstrate some of Hubble’s capabilities.

If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.

The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

Set up a Kubernetes cluster

To run a Kubernetes cluster on your local machine, you have the choice to either set up a single-node cluster with minikube, or a local multi-node cluster on Docker using kind:

  • minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) and is the easiest way to run a Kubernetes cluster locally.
  • kind runs a multi-node Kubernetes cluster using Docker containers to emulate cluster nodes. It allows you to experiment with the cluster-wide observability features of Hubble Relay.

If you are unsure which option to pick, follow the instructions for minikube, as it is less likely to cause friction.

Single-node cluster with minikube

Install kubectl & minikube

  1. Install kubectl version >= v1.10.0 as described in the Kubernetes Docs.
  2. Install minikube >= v1.3.1 as per minikube documentation: Install Minikube.

Note

It is important to validate that you have at least minikube v1.3.1 installed. Older versions of minikube ship a kernel configuration that is not compatible with the TPROXY requirements of Cilium >= 1.6.0.

  minikube version
  minikube version: v1.3.1
  commit: ca60a424ce69a4d79f502650199ca2b52f29e631

  3. Create a minikube cluster:

  minikube start --network-plugin=cni --memory=4096

Note

If minikube is deployed as a container (that is, if docker is the configured driver), then kube-proxy replacement features like host-reachable services may not work (see the GitHub issue). If you experience Kubernetes service load-balancing issues, pick any other driver from the supported list.
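
For example, assuming VirtualBox is installed on your machine, you could start minikube with that driver instead:

  minikube start --driver=virtualbox --network-plugin=cni --memory=4096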

  # Only available for minikube >= v1.12.1
  minikube start --cni=cilium --memory=4096

Note

From minikube v1.12.1 onward, the Cilium networking plugin can be enabled directly with the --cni=cilium parameter of the minikube start command. With this flag, minikube will not only mount the eBPF file system but also deploy quick-install.yaml automatically. However, this may not install the latest version of Cilium.

  4. Mount the eBPF filesystem:

  minikube ssh -- sudo mount bpffs -t bpf /sys/fs/bpf

Note

To install Cilium on a specific Kubernetes version, the --kubernetes-version vx.y.z parameter can be appended to the minikube start command when bootstrapping the local cluster. By default, minikube installs the most recent version of Kubernetes.
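
For example, to bootstrap the cluster with a specific version (v1.19.4 is used here purely for illustration):

  minikube start --network-plugin=cni --memory=4096 --kubernetes-version=v1.19.4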

Multi-node cluster with kind

Install dependencies

  1. Install docker stable as described in Install Docker Engine
  2. Install kubectl version >= v1.14.0 as described in the Kubernetes Docs
  3. Install helm >= v3.0.3 per Helm documentation: Installing Helm
  4. Install kind >= v0.7.0 per kind documentation: Installation and Usage

Configure kind

kind cluster creation is configured using a YAML configuration file. This step is necessary to disable the default CNI so that it can be replaced with Cilium.

Create a kind-config.yaml file based on the following template. It will create a cluster with 3 worker nodes and 1 control-plane node.

  kind: Cluster
  apiVersion: kind.x-k8s.io/v1alpha4
  nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
  networking:
    disableDefaultCNI: true

By default, the latest version of Kubernetes from when the kind release was created is used.

To change the version of Kubernetes being run, an image has to be defined for each node. See the Node Configuration documentation for more information.
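
As a sketch, a pinned control-plane node might look like the following; the exact kindest/node tag (and ideally its sha256 digest) should be taken from the release notes of your kind version:

  nodes:
  - role: control-plane
    image: kindest/node:v1.19.1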

Tip

By default, kind uses the following pod and service subnets:

  Networking.PodSubnet = "10.244.0.0/16"
  Networking.ServiceSubnet = "10.96.0.0/12"

If either of these subnets conflicts with your local network address range, update the networking section of the kind configuration file to specify subnets that do not conflict; otherwise you risk connectivity issues when deploying Cilium. For example:

  networking:
    disableDefaultCNI: true
    podSubnet: "10.10.0.0/16"
    serviceSubnet: "10.11.0.0/16"

Create a cluster

To create a cluster with the configuration defined above, pass the kind-config.yaml you created with the --config flag of kind.

  kind create cluster --config=kind-config.yaml

After a few seconds to a few minutes, a four-node cluster should be created.

A new kubectl context (kind-kind) should be added to KUBECONFIG or, if unset, to ${HOME}/.kube/config:

  kubectl cluster-info --context kind-kind
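
To confirm that the context was added, list the available contexts:

  kubectl config get-contexts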

Note

The cluster nodes will remain in state NotReady until Cilium is deployed. This behavior is expected.
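
You can observe this with:

  kubectl get nodes

All nodes should report a NotReady status until Cilium is up.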

Preload images

Preload the cilium image into each worker node in the kind cluster:

  docker pull cilium/cilium:v1.9.8
  kind load docker-image cilium/cilium:v1.9.8
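
If you want to verify that the image is present on a node, you can query the node's container runtime via crictl. This assumes the default kind node container names (kind-worker, kind-worker2, ...):

  docker exec kind-worker crictl images | grep cilium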

Deploy Cilium and Hubble

This section shows how to install Cilium, enable Hubble and deploy Hubble Relay and Hubble’s graphical UI.

Single-node cluster with minikube

Deploy Hubble and Cilium with the provided pre-rendered YAML manifest:

  kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
  kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-hubble-install.yaml

Multi-node cluster with kind

Note

First, make sure you have Helm 3 installed. Helm 2 is no longer supported.
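
You can check which version is on your PATH with:

  helm version --short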

Setup Helm repository:

  helm repo add cilium https://helm.cilium.io/

Deploy Hubble and Cilium with the following Helm command:

  helm install cilium cilium/cilium --version 1.9.8 \
    --namespace kube-system \
    --set nodeinit.enabled=true \
    --set kubeProxyReplacement=partial \
    --set hostServices.enabled=false \
    --set externalIPs.enabled=true \
    --set nodePort.enabled=true \
    --set hostPort.enabled=true \
    --set image.pullPolicy=IfNotPresent \
    --set ipam.mode=kubernetes \
    --set hubble.enabled=true \
    --set hubble.listenAddress=":4244" \
    --set hubble.relay.enabled=true \
    --set hubble.ui.enabled=true

Validate the Installation

You can monitor Cilium and all required components as they are being installed:

  kubectl -n kube-system get pods --watch
  NAME                                          READY   STATUS              RESTARTS   AGE
  cilium-2rlwx                                  0/1     Init:0/2            0          2s
  cilium-ncqtb                                  0/1     Init:0/2            0          2s
  cilium-node-init-9h9dd                        0/1     ContainerCreating   0          2s
  cilium-node-init-cmks4                        0/1     ContainerCreating   0          2s
  cilium-node-init-vnx5n                        0/1     ContainerCreating   0          2s
  cilium-node-init-zhs66                        0/1     ContainerCreating   0          2s
  cilium-nrzsp                                  0/1     Init:0/2            0          2s
  cilium-operator-599dbcf854-7w4rr              0/1     Pending             0          2s
  cilium-pghbg                                  0/1     Init:0/2            0          2s
  coredns-66bff467f8-gnzk7                      0/1     Pending             0          6m6s
  coredns-66bff467f8-wzh49                      0/1     Pending             0          6m6s
  etcd-kind-control-plane                       1/1     Running             0          6m15s
  hubble-relay-5684848cc8-6ldhj                 0/1     ContainerCreating   0          2s
  hubble-ui-54c6bc4cdc-h5drq                    0/1     Pending             0          2s
  kube-apiserver-kind-control-plane             1/1     Running             0          6m15s
  kube-controller-manager-kind-control-plane    1/1     Running             0          6m15s
  kube-proxy-dchqv                              1/1     Running             0          5m51s
  kube-proxy-jkvhr                              1/1     Running             0          5m53s
  kube-proxy-nb9b2                              1/1     Running             0          6m5s
  kube-proxy-ttf7z                              1/1     Running             0          5m50s
  kube-scheduler-kind-control-plane             1/1     Running             0          6m15s
  cilium-node-init-zhs66                        1/1     Running             0          4s

It may take a couple of minutes for all components to come up:

  kubectl -n kube-system get pods
  NAME                                          READY   STATUS    RESTARTS   AGE
  cilium-2rlwx                                  1/1     Running   0          16m
  cilium-ncqtb                                  1/1     Running   0          16m
  cilium-node-init-9h9dd                        1/1     Running   1          16m
  cilium-node-init-cmks4                        1/1     Running   1          16m
  cilium-node-init-vnx5n                        1/1     Running   1          16m
  cilium-node-init-zhs66                        1/1     Running   1          16m
  cilium-nrzsp                                  1/1     Running   0          16m
  cilium-operator-599dbcf854-7w4rr              1/1     Running   0          16m
  cilium-pghbg                                  1/1     Running   0          16m
  coredns-66bff467f8-gnzk7                      1/1     Running   0          22m
  coredns-66bff467f8-wzh49                      1/1     Running   0          22m
  etcd-kind-control-plane                       1/1     Running   0          22m
  hubble-relay-5684848cc8-2z6qk                 1/1     Running   0          21s
  hubble-ui-54c6bc4cdc-g5mgd                    1/1     Running   0          17s
  kube-apiserver-kind-control-plane             1/1     Running   0          22m
  kube-controller-manager-kind-control-plane    1/1     Running   0          22m
  kube-proxy-dchqv                              1/1     Running   0          21m
  kube-proxy-jkvhr                              1/1     Running   0          21m
  kube-proxy-nb9b2                              1/1     Running   0          22m
  kube-proxy-ttf7z                              1/1     Running   0          21m
  kube-scheduler-kind-control-plane             1/1     Running   0          22m
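
Rather than polling, you can also block until the Cilium agent pods report ready. This is a sketch assuming the k8s-app=cilium label set by the Helm chart:

  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=cilium --timeout=300s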

Accessing the Graphical User Interface

Hubble provides a graphical user interface which displays a service map of your service dependencies. To access Hubble UI, you can use the following command to forward the port of the web frontend to your local machine:

  kubectl port-forward -n kube-system svc/hubble-ui --address 0.0.0.0 --address :: 12000:80

Open http://localhost:12000 in your browser. You should see a screen inviting you to select a namespace; use the namespace selector dropdown in the top left corner to select one:

[Image: the namespace selector in the Hubble UI service map]

In this example, we deploy the Star Wars demo from the Identity-Aware and HTTP-Aware Policy Enforcement guide. However, you can apply the same techniques to observe the connectivity dependencies of applications of any type in your own namespaces and clusters.
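
If you want to follow along, the demo can be deployed with the manifest from that guide (assuming the v1.9 examples path):

  kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/minikube/http-sw-app.yaml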

Once the deployment is ready, issue a request from both spaceships to emulate some traffic.

  $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed
  $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed

These requests will then be displayed in the UI as service dependencies between the different pods:

[Image: the Hubble UI service map for the Star Wars demo]

At the bottom of the interface, you can also inspect each recent Hubble flow event in the current namespace individually.

Inspecting a wide variety of network traffic

The “connectivity-check” generates a wide variety of network traffic, including packets sent outside the cluster and packets dropped by policy.

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods. It is recommended to create a separate namespace for this.

  kubectl create ns cilium-test

Deploy the check with:

  kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/kubernetes/connectivity-check/connectivity-check.yaml

This deploys a series of deployments that use various connectivity paths to connect to each other. The connectivity paths include combinations with and without service load-balancing and various network policies. Each pod name indicates the connectivity variant, and its readiness and liveness gates indicate the success or failure of the test:

  $ kubectl get pods -n cilium-test
  NAME                                                     READY   STATUS    RESTARTS   AGE
  echo-a-76c5d9bd76-q8d99                                  1/1     Running   0          66s
  echo-b-795c4b4f76-9wrrx                                  1/1     Running   0          66s
  echo-b-host-6b7fc94b7c-xtsff                             1/1     Running   0          66s
  host-to-b-multi-node-clusterip-85476cd779-bpg4b          1/1     Running   0          66s
  host-to-b-multi-node-headless-dc6c44cb5-8jdz8            1/1     Running   0          65s
  pod-to-a-79546bc469-rl2qq                                1/1     Running   0          66s
  pod-to-a-allowed-cnp-58b7f7fb8f-lkq7p                    1/1     Running   0          66s
  pod-to-a-denied-cnp-6967cb6f7f-7h9fn                     1/1     Running   0          66s
  pod-to-b-intra-node-nodeport-9b487cf89-6ptrt             1/1     Running   0          65s
  pod-to-b-multi-node-clusterip-7db5dfdcf7-jkjpw           1/1     Running   0          66s
  pod-to-b-multi-node-headless-7d44b85d69-mtscc            1/1     Running   0          66s
  pod-to-b-multi-node-nodeport-7ffc76db7c-rrw82            1/1     Running   0          65s
  pod-to-external-1111-d56f47579-d79dz                     1/1     Running   0          66s
  pod-to-external-fqdn-allow-google-cnp-78986f4bcf-btjn7   1/1     Running   0          66s

Note

If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
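
To list only the pods stuck in Pending, you can use a standard field selector:

  kubectl get pods -n cilium-test --field-selector=status.phase=Pending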

To see the traffic in Hubble, open http://localhost:12000/cilium-test in your browser.

Inspecting the cluster’s network traffic with Hubble Relay

Now let’s install the Hubble CLI on your PC/laptop. This will allow you to inspect the traffic using Hubble Relay.

Linux

Download the latest hubble release:

  export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz"
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-amd64.tar.gz.sha256sum"
  sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
  tar zxf hubble-linux-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

  sudo mv hubble /usr/local/bin

MacOS

Download the latest hubble release:

  export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz"
  curl -LO "https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-darwin-amd64.tar.gz.sha256sum"
  shasum -a 256 -c hubble-darwin-amd64.tar.gz.sha256sum
  tar zxf hubble-darwin-amd64.tar.gz

and move the hubble CLI to a directory listed in the $PATH environment variable. For example:

  sudo mv hubble /usr/local/bin

Windows

Download the latest hubble release:

  1. curl -LO "https://raw.githubusercontent.com/cilium/hubble/master/stable.txt"
  2. set /p HUBBLE_VERSION=<stable.txt
  3. curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz"
  4. curl -LO "https://github.com/cilium/hubble/releases/download/%HUBBLE_VERSION%/hubble-windows-amd64.tar.gz.sha256sum"
  5. certutil -hashfile hubble-windows-amd64.tar.gz SHA256
  6. type hubble-windows-amd64.tar.gz.sha256sum
  7. :: verify that the checksum from the two commands above match
  8. tar zxf hubble-windows-amd64.tar.gz

and move the hubble.exe CLI to a directory listed in the %PATH% environment variable after extracting it from the tarball.

In order to access Hubble Relay with the hubble CLI, let’s make sure to port-forward the Hubble Relay service locally:

  $ kubectl port-forward -n kube-system svc/hubble-relay --address 0.0.0.0 --address :: 4245:80

Note

This terminal window needs to remain open to keep the port-forwarding in place. Open a separate terminal window to use the hubble CLI.
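
Alternatively, on Linux or macOS you can run the port-forward in the background of the same terminal:

  kubectl port-forward -n kube-system svc/hubble-relay --address 0.0.0.0 --address :: 4245:80 &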

Confirm that the Hubble Relay service is healthy via hubble status:

  $ hubble status --server localhost:4245
  Healthcheck (via localhost:4245): Ok
  Max Flows: 16384

In order to avoid passing --server localhost:4245 to every command, you may export the following environment variable:

  $ export HUBBLE_SERVER=localhost:4245
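
With the variable exported, the same health check can be run without the extra flag:

  $ hubble status
  Healthcheck (via localhost:4245): Ok
  Max Flows: 16384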

Let’s now issue some requests to emulate some traffic again. This first request is allowed by the policy.

  $ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  Ship landed

This next request is accessing an HTTP endpoint which is denied by policy.

  $ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
  Access denied

Finally, this last request will hang because the xwing pod does not have the org=empire label required by policy. Press Control-C to kill the curl request, or wait for it to time out.

  $ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
  command terminated with exit code 28

Let’s now inspect this traffic using the CLI. The command below filters all traffic on the application layer (L7, HTTP) to the deathstar pod:

  $ hubble observe --pod deathstar --protocol http
  TIMESTAMP             SOURCE                                  DESTINATION                             TYPE            VERDICT     SUMMARY
  Jun 18 13:52:23.843   default/tiefighter:52568                default/deathstar-5b7489bc84-8wvng:80   http-request    FORWARDED   HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing
  Jun 18 13:52:23.844   default/deathstar-5b7489bc84-8wvng:80   default/tiefighter:52568                http-response   FORWARDED   HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing)
  Jun 18 13:52:31.019   default/tiefighter:52628                default/deathstar-5b7489bc84-8wvng:80   http-request    DROPPED     HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port

The following command shows all traffic to the deathstar pod that has been dropped:

  $ hubble observe --pod deathstar --verdict DROPPED
  TIMESTAMP             SOURCE                     DESTINATION                             TYPE            VERDICT   SUMMARY
  Jun 18 13:52:31.019   default/tiefighter:52628   default/deathstar-5b7489bc84-8wvng:80   http-request    DROPPED   HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port
  Jun 18 13:52:38.321   default/xwing:34138        default/deathstar-5b7489bc84-v4s7d:80   Policy denied   DROPPED   TCP Flags: SYN
  Jun 18 13:52:38.321   default/xwing:34138        default/deathstar-5b7489bc84-v4s7d:80   Policy denied   DROPPED   TCP Flags: SYN
  Jun 18 13:52:39.327   default/xwing:34138        default/deathstar-5b7489bc84-v4s7d:80   Policy denied   DROPPED   TCP Flags: SYN

Feel free to further inspect the traffic. To get help for the observe command, use hubble help observe.
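
For example, to stream dropped flows in the cilium-test namespace in real time (assuming your Hubble CLI version supports the --namespace and --follow flags):

  $ hubble observe --namespace cilium-test --verdict DROPPED --follow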

Cleanup

Once you are done experimenting with Hubble, you can remove all traces of the cluster by running the command for your setup:

Single-node cluster with minikube

  minikube delete

Multi-node cluster with kind

  kind delete cluster