Tutorial: How to Use Cilium Hubble for Observability in CNI Chaining Mode (Part 1)
In this tutorial, I’ll show you how to use Cilium Hubble for network observability, without relying on Cilium for your Container Network Interface (CNI), by making use of CNI chaining.
What is Hubble?
Hubble is a fully distributed networking and security observability platform, built on top of Cilium and eBPF to bring programmable observability to highly dynamic workload environments, such as Kubernetes clusters.
Note: If your Kubernetes cluster has a managed Cilium installation, for example, GKE Dataplane v2, please seek guidance from the cloud provider to enable Hubble in your environment.
What is CNI chaining?
CNI chaining is a feature of the CNI specification that makes it possible to chain multiple CNI plugins together, each performing different tasks when creating new container workloads.
It’s not always possible to switch to Cilium as the default CNI in your existing Kubernetes environment. However, you may still want to take advantage of some of Cilium’s features, such as Hubble for observability. An example would be telcos that need to combine differing features from two CNIs for specific workloads, but still want to use Hubble for better visibility into their cluster workloads.
Getting Ready
In this walkthrough, I’m going to keep it simple and use a Kind cluster, so that you can recreate these steps locally. You should also be able to transfer this knowledge to a full Kubernetes deployment running on your chosen platform.
By default, Kind uses several CNI plugins to establish container IP addressing, routing and port forwarding. Instead of replacing all of that with Cilium’s CNI, I’m going to leave it as it is and install Cilium’s CNI at the end of the existing plugin chain.
The details in this walkthrough generally apply to most CNI chaining scenarios and CNI plugins, such as those provided by cloud providers (AWS VPC CNI, Azure CNI on AKS) and common CNIs such as Calico and Antrea. In fact, Cilium has specialized chaining modes for different CNI providers, as well as the generic veth chaining mode I’ll be using here.
I’m using the generic veth chaining mode for the Kind cluster because Kind makes use of the ptp CNI plugin, which creates veth pairs as part of container network provisioning. The generic veth chaining mode is appropriate for several other CNI providers as well, including Calico.
Prerequisites
Before we start, you’ll want to download a few additional tools to your workstation:
- Kind – A container-based Kubernetes distribution, useful for development and testing
- Cilium CLI – A helpful tool for managing Cilium installs and upgrades
- Hubble CLI – This will let you dive into the Hubble observability data from your workstation terminal.
Walkthrough
First, let’s create a simple Kind cluster with the default CNI, using this kind-config.yaml file for a three-node cluster.
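A minimal sketch of that config, assuming one control-plane node and two workers (any three-node layout will do), and keeping Kind’s default CNI in place:

```bash
# Write a kind-config.yaml for a three-node cluster using Kind's default CNI (kindnet)
cat > kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
```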
Now create the cluster with the following command.
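```bash
kind create cluster --config kind-config.yaml
```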
This provides us with a completely functional three-node development Kubernetes cluster.
Now let’s install Cilium in CNI chaining mode. You can use either the Cilium CLI tool:
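A sketch of the install command, using the chaining-related Helm values discussed further below (your Cilium version and any extra values may differ):

```bash
# Install Cilium, chaining onto the existing kindnet CNI instead of replacing it
cilium install \
  --set cni.chainingTarget=kindnet \
  --set cni.chainingMode=generic-veth \
  --set routingMode=native \
  --set enableIPv4Masquerade=false \
  --set enableIPv6Masquerade=false
```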
Or you can install Cilium using Helm:
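The equivalent Helm invocation would look roughly like this (chart version pinning omitted for brevity):

```bash
# Add the official Cilium Helm repository and install with the same chaining values
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set cni.chainingTarget=kindnet \
  --set cni.chainingMode=generic-veth \
  --set routingMode=native \
  --set enableIPv4Masquerade=false \
  --set enableIPv6Masquerade=false
```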
Both approaches rely on the same Helm charts. The benefit of using the Cilium CLI tool is that it interrogates your cluster configuration and attempts to determine the required install options for you, such as the number of Cilium operator replicas to configure via Helm values. You can also use the Cilium CLI to report overall Cilium operational status:
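```bash
cilium status --wait
```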
The --wait flag instructs the status command to block until Cilium is fully installed and healthy. While we wait for the Cilium container images to be downloaded and deployed into the cluster, let’s review the chaining-related Helm chart options used in the install command.
The --set cni.chainingTarget=kindnet option instructs Cilium to find the CNI configuration named “kindnet” and append the Cilium CNI to the end of that CNI plugin chain.
The --set cni.chainingMode=generic-veth argument instructs Cilium to treat this as a generic veth CNI scenario. In this mode, instead of creating its own veth pair, the Cilium CNI makes use of the veth device information passed through the CNI plugin chain for tasks such as attaching its eBPF observability programs.
The --set routingMode=native
option instructs Cilium to operate in native routing mode, and to not create its own encapsulated network.
The --set enable*Masquerade=false options instruct Cilium not to provide NAT masquerading functionality on the cluster nodes, so as not to interfere with the existing CNI plugins’ masquerading settings.
The cilium status --wait command should have finished by now; let’s take a look:
All the expected Cilium components appear to be up and running and passing their health checks. We can test further by running the Cilium CLI’s connectivity tests to make sure connectivity between workloads and the outside world works as expected.
Connectivity Tests
With Cilium installed, I can run the connectivity tests using the command below. All tests should pass except the L7 policy tests; this is expected. L7 policy is an advanced feature that requires Cilium to handle aspects of container routing and is incompatible with CNI chaining mode. But do you know what is compatible? Hubble, the observability tool that is the focus of this blog post.
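The connectivity tests are run with a single command:

```bash
cilium connectivity test
```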
Hubble observability without a Cilium-managed CNI
Enabling Hubble works exactly the same way regardless of whether CNI chaining mode is used:
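A sketch of the upgrade command, reusing the values from the original install and enabling the two Hubble components discussed below:

```bash
# Upgrade the existing Cilium install, keeping prior values and enabling Hubble Relay and UI
cilium upgrade --reuse-values \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
```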
Here I’m making use of the new upgrade functionality available in recent versions of the Cilium CLI tool. Helm users should find this upgrade command familiar. The --reuse-values option instructs the upgrade to start from the previously used Helm values, to which I’m appending two additional values during the upgrade.
The first new option, --set hubble.relay.enabled=true, ensures the upgrade installs the Hubble Relay service into the cluster. I’ll need the Hubble Relay service to provide cluster-wide observability. The second new option, --set hubble.ui.enabled=true, installs the Hubble UI service into the cluster. The Hubble UI provides a service map visualization constructed from the Hubble network flow events.
Once the upgrade command returns, I can check to make sure everything is ready with:
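```bash
cilium status --wait
```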
Before I can use the Hubble CLI tool that I installed while preparing my workstation for this tutorial, I need to port-forward the Hubble Relay service in the Kind cluster to my workstation’s host network. I can do this easily with the Cilium CLI tool:
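Running the port-forward in the background keeps the terminal free for the Hubble CLI:

```bash
cilium hubble port-forward &
```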
To access the Hubble UI I can run the following command:
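```bash
cilium hubble ui
```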
This will open up a browser tab connected to the Hubble UI service port-forwarded to my laptop.
Now I can run the Cilium connectivity tests again and observe all the L3/L4 network flows in the cilium-test namespace with the Hubble CLI tool, as shown below.
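A sketch of that flow, assuming the default cilium-test namespace used by the connectivity tests (run the observe command in a second terminal, or after the tests finish):

```bash
# Re-run the connectivity tests to generate traffic
cilium connectivity test

# Inspect the L3/L4 flows captured by Hubble for the test namespace
hubble observe --namespace cilium-test
```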
I can even get a service map view of the namespace using the Hubble UI.
There it is, L3/L4 observability using Hubble without Cilium providing the CNI for this cluster.
And, of course, this works with Isovalent Enterprise for Cilium as well, with added network observability benefits such as Hubble Timescape, which provides a historical datastore for network flows and Tetragon process events.
Recap
Hopefully this gives you something to consider. If you find yourself relying on your existing CNI but needing to take advantage of the eBPF-powered capabilities of Cilium and Hubble for observability, then Cilium’s CNI chaining mode provides a way to implement these features without disrupting your existing CNI or workloads.
If you want to learn more about all the observability features you get with Hubble when you move to using Cilium as your CNI, you can read the recent Hubble re-introduction blog post or watch this video. If you want to get hands-on with Hubble, I recommend the Isovalent lab Isovalent Enterprise for Cilium: Connectivity Visibility, which takes you through all the Hubble observability features, such as network flows, DNS and HTTP protocol visibility, and enterprise features such as Hubble Timescape.
If you’re ready to leave the Kind cluster environment and are interested in trying this out in a production-relevant cloud service scenario, take a look at this blog post, which covers how to use Cilium CNI chaining with EKS.
In part 2, we’ll dive into other CNI chaining modes and how we can implement Cilium network policies on top of another CNI.
Thanks for reading.
Jef is a technical community advocate at Isovalent.com. He has more than a decade of experience in the technology industry as a software engineer, open source contributor, IoT hardware developer, operations engineer, and most recently as a community advocate. Prior to joining Isovalent, he was the Principal Developer Advocate at Sensu Inc.