Tutorial: How to Use Cilium Hubble for Observability in CNI Chaining Mode (Part 1)


In this tutorial, I’ll show you how to use Cilium Hubble for network observability, without relying on Cilium for your Container Network Interface (CNI), by making use of CNI chaining.
What is Hubble?
Hubble is a fully distributed networking and security observability platform, built on top of Cilium and eBPF to bring programmable observability to highly dynamic workload environments, such as Kubernetes clusters.
Note: If your Kubernetes cluster has a managed Cilium installation, for example, GKE Dataplane v2, please seek guidance from the cloud provider to enable Hubble in your environment.
What is CNI chaining?
CNI chaining is a feature of the CNI specification that makes it possible to chain multiple CNI plugins together, each performing a different task when creating new container workloads.
It’s not always possible to switch to Cilium as the default CNI in your existing Kubernetes environment, yet you may still want to take advantage of some of Cilium’s features, such as Hubble for observability. An example would be telcos that need differing features from two CNIs for specific workloads, but still want to use Hubble for better visibility into their cluster workloads.
Getting Ready
In this walkthrough, I’m going to keep it simple and use a Kind cluster so that you can recreate these steps locally; however, you should also be able to transfer this knowledge to a full Kubernetes deployment running in your chosen location/platform.
By default, Kind uses several CNI plugins to establish container IP addressing, routing and port forwarding. Instead of replacing all of that with Cilium’s CNI, I’m going to leave it as it is and install Cilium’s CNI at the end of the existing plugin chain.
The details in this walkthrough generally apply to most CNI chaining scenarios and CNI plugins, such as those provided by cloud providers (AWS VPC CNI, Azure CNI on AKS) and common CNIs such as Calico and Antrea. In fact, Cilium has specialized chaining modes for different CNI providers, as well as the generic veth chaining mode I’ll be using here.
I’m using the generic veth chaining mode for the Kind cluster because Kind makes use of the ptp CNI plugin, creating veth pairs as part of container network provisioning. In fact, the generic veth chaining mode is appropriate for several CNI providers, including Calico.
Prerequisites
Before we start, you’ll want to download a few additional tools to your workstation:
- Kind – A container-based Kubernetes distribution, useful for development and testing
- Cilium CLI – A helpful tool for managing Cilium installs and upgrades
- Hubble CLI – This will let you dive into the Hubble observability data from your workstation terminal.
Walkthrough
First, let’s create a simple Kind cluster with the default CNI, using a kind-config.yaml file that describes a three-node cluster.
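Here’s a minimal sketch of what that file could look like: one control-plane node and two workers, with the default CNI (kindnet) left enabled.

```yaml
# kind-config.yaml -- a minimal three-node cluster.
# The default CNI stays enabled, so disableDefaultCNI is not set.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```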
Now create the cluster with the following command.
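The cluster name below is just an example:

```bash
kind create cluster --name chaining-demo --config kind-config.yaml
```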
This provides us with a completely functional three node development Kubernetes cluster.
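A quick check once Kind finishes should show one control-plane node and two workers in the Ready state:

```bash
kubectl get nodes
```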
Now let’s install Cilium using CNI chaining mode. You can either use the Cilium CLI tool:
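Here’s a sketch of the install, passing the Helm values we’ll walk through below (I’ve expanded the enable*Masquerade shorthand used later in this post into its IPv4 and IPv6 forms):

```bash
cilium install \
  --set cni.chainingTarget=kindnet \
  --set cni.chainingMode=generic-veth \
  --set routingMode=native \
  --set enableIPv4Masquerade=false \
  --set enableIPv6Masquerade=false
```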
Or you can install Cilium using Helm:
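The Helm equivalent looks roughly like this, assuming the official Cilium chart repository and the usual kube-system namespace:

```bash
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set cni.chainingTarget=kindnet \
  --set cni.chainingMode=generic-veth \
  --set routingMode=native \
  --set enableIPv4Masquerade=false \
  --set enableIPv6Masquerade=false
```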
Both approaches rely on the same Helm charts; the benefit of using the Cilium CLI tool is that it interrogates your cluster configuration and attempts to determine the required install options for you, such as the number of Cilium operator replicas to configure via Helm values. You can also use the Cilium CLI to report overall Cilium operational status:
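```bash
# Block until Cilium reports as installed and healthy.
cilium status --wait
```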
The --wait flag tells the status command to block until Cilium reports as fully installed. While we wait for the Cilium container images to be downloaded and started in the cluster, let’s review the Helm chart options relevant to CNI chaining that were used in the install command.
The --set cni.chainingTarget=kindnet option instructs Cilium to find the CNI configuration named “kindnet” and append the Cilium CNI plugin to the end of that CNI plugin chain.
The --set cni.chainingMode=generic-veth argument instructs Cilium to treat this as a generic veth CNI scenario. In this mode, instead of creating its own veth pair, the Cilium CNI plugin makes use of the veth device information passed through the CNI plugin chain for tasks such as attaching its eBPF observability programs.
The --set routingMode=native option instructs Cilium to operate in native routing mode and not to create its own encapsulated network.
The --set enable*Masquerade=false options instruct Cilium not to provide NAT masquerading functionality on the cluster nodes, so as not to interfere with the existing CNI plugins’ masquerading settings.
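For context, here’s a rough illustration (not captured from a live node; the exact file name, plugin entries, and CNI version vary with your Kind and Cilium releases) of what the chained configuration under /etc/cni/net.d looks like conceptually once Cilium appends itself to the kindnet plugin list:

```json
{
  "cniVersion": "0.3.1",
  "name": "kindnet",
  "plugins": [
    { "type": "ptp", "ipam": { "type": "host-local" } },
    { "type": "portmap", "capabilities": { "portMappings": true } },
    { "type": "cilium-cni" }
  ]
}
```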
The cilium status --wait command should be finished by now, so let’s take a look:
All the expected Cilium components appear to be up, running, and passing their health checks. We can do further testing by running the Cilium CLI’s connectivity tests to make sure connectivity between workloads and to the outside world works as expected.
Connectivity Tests
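With Cilium installed, the connectivity tests can be run with a single Cilium CLI command:

```bash
cilium connectivity test
```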
All tests should pass except for the L7 policy tests, and that is expected: L7 policy is an advanced feature that requires Cilium to handle aspects of container routing, which is incompatible with CNI chaining mode. But do you know what is compatible? Hubble, the observability tool that is the focus of this blog post.