Tutorial: How to Use Cilium Hubble for Observability in CNI Chaining Mode (Part 1)

In this tutorial, I’ll show you how to use Cilium Hubble for network observability, without relying on Cilium for your Container Network Interface (CNI), by making use of CNI chaining.

What is Hubble?

Hubble is a fully distributed networking and security observability platform, built on top of Cilium and eBPF to bring programmable observability to highly dynamic workload environments, such as Kubernetes clusters. 

Note: If your Kubernetes cluster has a managed Cilium installation, for example, GKE Dataplane v2, please seek guidance from the cloud provider to enable Hubble in your environment.

What is CNI chaining?

CNI chaining is a feature of the CNI specification that makes it possible to chain multiple CNI plugins together, each performing a different task when creating new container workloads.
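
To make that more concrete, here is a simplified, hypothetical example of what a chained CNI configuration file (a .conflist in /etc/cni/net.d/) can look like. The plugin entries are illustrative rather than copied from a real installation: a primary plugin handles interface creation and IP addressing, portmap adds host port forwarding, and cilium-cni is appended at the end of the chain.

{
  "cniVersion": "0.3.1",
  "name": "example-network",
  "plugins": [
    {
      "type": "ptp",
      "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.244.1.0/24" }]] }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    },
    {
      "type": "cilium-cni"
    }
  ]
}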

It’s not always possible to switch to Cilium as the default CNI in your existing Kubernetes environment. However, you may still want to take advantage of some of Cilium’s features, such as Hubble for observability. One example is telcos that need different features from two CNIs for specific workloads, but still want Hubble for better visibility into their cluster workloads.

Getting Ready

In this walkthrough, I’m going to keep it simple and use a Kind cluster so that you can recreate these steps locally. However, you should also be able to transfer this knowledge to a full Kubernetes deployment running on your chosen platform.

By default, Kind uses several CNI plugins to establish container IP addressing, routing and port forwarding. Instead of replacing all of that with Cilium’s CNI, I’m going to leave it as it is and install Cilium’s CNI at the end of the existing plugin chain.

The details in this walkthrough generally apply to most CNI chaining scenarios and CNI plugins, such as those provided by cloud providers (AWS VPC CNI, AKS AzureNet) and common CNIs such as Calico and Antrea. In fact, Cilium has specialized chaining modes for different CNI providers as well as the generic veth chaining mode I’ll be using here.

I’m using the generic veth chaining mode for the Kind cluster because Kind makes use of the ptp CNI plugin, which creates veth pairs as part of container network provisioning. The generic veth chaining mode is also appropriate for several other CNI providers, including Calico.

Prerequisites

Before we start, you’ll want to download a few additional tools to your workstation:

  • Kind – A container-based Kubernetes distribution, useful for development and testing
  • Cilium CLI – A helpful tool for managing Cilium installs and upgrades
  • Hubble CLI – This will let you dive into the Hubble observability data from your workstation terminal. 
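
If you use Homebrew on macOS or Linux, all three are available as packages (the package names below are assumptions based on the current Homebrew formulae; you can also grab release binaries from each project’s GitHub releases page):

$ brew install kind cilium-cli hubble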

Walkthrough

First, let’s create a simple three-node Kind cluster with the default CNI, using this kind-config.yaml file.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
networking:
  disableDefaultCNI: false

Now create the cluster with the following command.

$ kind create cluster --config kind-config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.1) 🖼
 ✓ Preparing nodes  📦 📦  
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜

This provides us with a completely functional three-node development Kubernetes cluster.
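
If you want a quick sanity check before moving on, Kind points your kubeconfig at the new cluster automatically, so you can confirm that all three nodes eventually reach the Ready state:

$ kubectl get nodes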

Now let’s install Cilium in CNI chaining mode. You can either use the Cilium CLI tool:

$ cilium install --version 1.14.1 --set cni.chainingTarget=kindnet  --set cni.chainingMode=generic-veth --set routingMode=native --set enableIPv4Masquerade=false --set enableIPv6Masquerade=false

Or you can install Cilium using Helm:

$ helm install --version 1.14.1 cilium cilium/cilium -n kube-system --set operator.replicas=1 --set cni.chainingTarget=kindnet  --set cni.chainingMode=generic-veth --set routingMode=native --set enableIPv4Masquerade=false --set enableIPv6Masquerade=false
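
Note that the Helm route assumes the Cilium chart repository is already configured on your workstation; if it isn’t, add it before running the install:

$ helm repo add cilium https://helm.cilium.io/
$ helm repo update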

Both approaches rely on the same Helm charts. The benefit of the Cilium CLI tool is that it interrogates your cluster configuration and attempts to determine the required install options for you, such as the number of Cilium operator replicas to configure via Helm values. You can also use the Cilium CLI to check overall Cilium operational status:

$ cilium status --wait

The --wait flag instructs the status command to keep waiting until Cilium is fully installed and reports a healthy status. While we wait for the Cilium container images to be downloaded and started in the cluster, let’s review the CNI chaining-related Helm chart options that were used in the install command.

The --set cni.chainingTarget=kindnet option instructs Cilium to find the CNI configuration named “kindnet” and append the Cilium CNI plugin to the end of that CNI plugin chain.

The --set cni.chainingMode=generic-veth argument instructs Cilium to treat this as a generic veth CNI scenario. In this mode, instead of creating its own veth pair, the Cilium CNI plugin makes use of the veth device information passed through the CNI plugin chain, for example when attaching its eBPF observability programs.

The --set routingMode=native option instructs Cilium to operate in native routing mode, and to not create its own encapsulated network. 

The --set enableIPv4Masquerade=false and --set enableIPv6Masquerade=false options instruct Cilium not to provide NAT masquerading functionality on the cluster nodes, so as not to interfere with the existing CNI plugins’ masquerading settings.
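
If you’re curious what the chaining looks like on disk, you can peek inside one of the Kind nodes once the Cilium agent is running. Treat this as a sketch: the exact file names depend on the Cilium version, but you should find a Cilium-generated conflist containing the original kindnet plugins with cilium-cni appended as the last entry.

$ docker exec kind-worker ls /etc/cni/net.d/
$ docker exec kind-worker cat /etc/cni/net.d/05-cilium.conflist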

The cilium status --wait command should be finished by now, so let’s take a look:

$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\	Cilium:         	OK
 \__/¯¯\__/	Operator:       	OK
 /¯¯\__/¯¯\	Envoy DaemonSet:	disabled (using embedded mode)
 \__/¯¯\__/	Hubble Relay:   	disabled
    \__/   	ClusterMesh:    	disabled

Deployment         	cilium-operator	Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet          	cilium         	Desired: 3, Ready: 3/3, Available: 3/3
Containers:        	cilium         	Running: 3
                   	cilium-operator	Running: 1
Cluster Pods:      	2/3 managed by Cilium
Helm chart version:	1.14.1
Image versions     	cilium         	quay.io/cilium/cilium:v1.14.1@sha256:edc1d05ea1365c4a8f6ac6982247d5c145181704894bb698619c3827b6963a72: 3
                   	cilium-operator	quay.io/cilium/operator-generic:v1.14.1@sha256:e061de0a930534c7e3f8feda8330976367971238ccafff42659f104effd4b5f7: 1

All the expected Cilium components appear to be up and running and passing their health checks. We can do further testing by running the Cilium CLI’s connectivity tests to make sure connectivity between workloads and the outside world works as expected.

Connectivity Tests

With Cilium installed, I can run the connectivity tests. All tests should pass except for the L7 policy tests; this is expected. L7 policy is an advanced feature that requires Cilium to handle aspects of container routing, and it is incompatible with CNI chaining mode. But do you know what is compatible? Hubble, the observability tool that is the focus of this blog post.

$ cilium connectivity test
...
📋 Test Report
2/2 tests failed (6/24 actions), 53 tests skipped, 0 scenarios skipped:
Test [echo-ingress-l7]:
...

Hubble observability without a Cilium-managed CNI

Enabling Hubble works exactly the same way regardless of whether CNI chaining mode is used:

$ cilium upgrade --version 1.14.1 --reuse-values --set hubble.relay.enabled=true --set hubble.ui.enabled=true

Here I’m making use of the new upgrade functionality available in recent versions of the Cilium CLI tool. Helm users should find this upgrade command familiar. The --reuse-values option instructs the upgrade to start from the previously used Helm values, and then I’m appending two additional Helm values during the upgrade.

The first new option, hubble.relay.enabled=true, ensures the upgrade installs the Hubble Relay service into the cluster. I’ll need the Hubble Relay service to provide cluster-wide observability. The second new option, hubble.ui.enabled=true, installs the Hubble UI service into the cluster. The Hubble UI provides a service map visualization constructed from Hubble network flow events.
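
If you want to double-check which Helm values are now in effect after the upgrade, you can ask Helm directly (this assumes the default release name cilium in the kube-system namespace, matching the install commands above):

$ helm get values cilium -n kube-system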

Once the upgrade command returns, I can check to make sure everything is ready with:

$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\	  Cilium:         	OK
 \__/¯¯\__/	  Operator:       	OK
 /¯¯\__/¯¯\	  Envoy DaemonSet:	disabled (using embedded mode)
 \__/¯¯\__/	  Hubble Relay:   	OK
    \__/      ClusterMesh:    	disabled
Deployment    hubble-relay   	Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet     cilium         	Desired: 3, Ready: 3/3, Available: 3/3
Deployment    hubble-ui      	Desired: 1, Ready: 1/1, Available: 1/1
Deployment    cilium-operator	Desired: 1, Ready: 1/1, Available: 1/1
Containers:   cilium         	Running: 3
              hubble-ui      	Running: 1
              cilium-operator	Running: 1
              hubble-relay   	Running: 1

Before I can use the Hubble CLI tool that I installed while setting up my workstation for this tutorial, I need to port-forward the Hubble Relay service in the Kind cluster to my workstation’s host network. I can do this easily with the Cilium CLI tool:

$ cilium hubble port-forward &
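
With the port-forward running in the background, a quick way to confirm that the Hubble CLI can reach the relay is:

$ hubble status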

To access the Hubble UI I can run the following command:

$ cilium hubble ui

This will open up a browser tab connected to the Hubble UI service port-forwarded to my laptop.

Now I can run the Cilium connectivity tests again, and I can observe all the L3/L4 network flows in the cilium-test namespace with the Hubble CLI tool. 

$ hubble observe -n cilium-test
...
Aug 15 19:13:48.052: 10.244.2.34:50592 (host) -> cilium-test/echo-other-node-545c9b778b-9t98s:8080 (ID:50303) to-endpoint FORWARDED (TCP Flags: ACK)
Aug 15 19:13:48.053: 10.244.2.34:50592 (host) -> cilium-test/echo-other-node-545c9b778b-9t98s:8080 (ID:50303) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Aug 15 19:13:48.093: 10.244.1.102:50206 (host) -> cilium-test/echo-same-node-965bbc7d4-24zgc:8181 (ID:14424) to-endpoint FORWARDED (TCP Flags: SYN)
...
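
The Hubble CLI also supports filtering flows and following them in real time. For example, to watch only TCP traffic to port 8080 in the test namespace:

$ hubble observe -n cilium-test --protocol TCP --port 8080 --follow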

I can even get a service map view of the namespace using the Hubble UI.

There it is, L3/L4 observability using Hubble without Cilium providing the CNI for this cluster.

And, of course, this works with Isovalent Enterprise for Cilium as well, with added network observability benefits such as Hubble Timescape, which provides a historical datastore for network flows and Tetragon process events.

Recap

Hopefully this gives you something to consider. If you find yourself relying on your existing CNI but needing to take advantage of the eBPF-powered capabilities of Cilium and Hubble for observability, Cilium’s CNI chaining mode provides a way to implement these features without disrupting your existing CNI or workloads.

If you want to learn more about all the observability features you get with Hubble when you move to using Cilium as your CNI, you can read the recent Hubble re-introduction blog post or watch this video. If you want to get hands-on with Hubble, I recommend the Isovalent lab Isovalent Enterprise for Cilium: Connectivity Visibility, which takes you through all the Hubble observability features, such as network flows, DNS and HTTP protocol visibility, and enterprise features such as Hubble Timescape.

If you’re ready to leave the Kind cluster environment and are interested in trying this out in a production-relevant cloud service scenario, take a look at this blog post, which covers how to use Cilium CNI chaining with EKS.

In part 2, we’ll dive into other CNI chaining modes and how we can implement Cilium Network Policies on top of another CNI.

Thanks for reading. 

Jef Spaleta
Technical Community Advocate
