Nico Vibert is Senior Technical Marketing Engineer at Isovalent – the company behind the open-source cloud native solution Cilium. Nico has worked in many different roles – operations and support, design and architecture, technical pre-sales – at companies such as HashiCorp, VMware and Cisco. Nico’s focus is primarily on network, cloud and automation and he loves creating content and writing books. Nico regularly speaks at events, whether on a large scale such as VMworld, Cisco Live or at smaller forums such as VMware and AWS User Groups or virtual events such as HashiCorp HashiTalks. Outside of Isovalent, Nico’s passionate about intentional diversity & inclusion initiatives and is Chief DEI Officer at the Open Technology organization OpenUK. You can find out more about him on his blog.
IPv6 Networking and Observability with Cilium and Hubble
[10:00] In this video, Senior Technical Marketing Engineer Nico Vibert will walk you through how to deploy an IPv4/IPv6 dual-stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities.
Welcome to the Cilium Flash episode on IPv6 networking and observability. In this demo, I’m going to walk you through how to deploy an IPv4/IPv6 dual-stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities. And this last point is very important, because visibility of IPv6 flows is essential. One of the reasons IPv6 adoption has been slow is probably the fear that it would be hard to operate and manage. I think that’s a fair concern, but as you’ll see, using a tool like Hubble really helps operators understand and visualize how IPv6 applications communicate with each other. So, let’s go straight into the demo.
So, dual-stack networking has been enabled by default since Kubernetes 1.23, and it’s fully supported by Kind, so we’re going to deploy a cluster in dual-stack mode. In Kind, we only need to set a few parameters. First, we disable the default CNI, because we want to install Cilium in dual-stack mode instead. The main setting is the IP family, which we set to dual. Finally, you can see the API server address set to 127.0.0.1, which is the local address on the host for the Kubernetes API server. We need this on a Mac because IPv6 port-forwarding doesn’t work with Docker on Mac.
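The Kind settings described above can be sketched as a cluster config like the one below (the node layout is an assumption; the three `networking` fields are the ones called out in the demo):

```shell
# Hypothetical kind-config.yaml reflecting the settings described above
cat <<'EOF' > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true      # we will install Cilium instead of the default CNI
  ipFamily: dual               # allocate both IPv4 and IPv6 addresses
  apiServerAddress: 127.0.0.1  # IPv6 port-forwarding doesn't work with Docker on Mac
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
```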
So, let’s go ahead and create the cluster. It will just take a few seconds; we’re speeding it up for the demo. The first thing to notice is that the nodes themselves pick up both IPv4 and IPv6 addresses, and you can see the Pod CIDRs from which IPv4 and IPv6 addresses will be allocated to your Pods. We can now move on to installing Cilium in dual-stack mode. You only need one parameter to enable IPv6 on Cilium, which is ipv6.enabled=true, as it’s disabled by default. And that’s it: that’s the only setting required to run Cilium in IPv6 mode. You can see that Cilium was successfully configured and enabled. We don’t have Hubble yet; that’s what we’re going to do in the next task.
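A minimal sketch of the two steps above, assuming the config file from earlier is named kind-config.yaml (note: older versions of the Cilium CLI used --helm-set instead of --set for Helm values):

```shell
# Create the dual-stack Kind cluster
kind create cluster --config kind-config.yaml

# Inspect node addresses and Pod CIDRs (expect both IPv4 and IPv6)
kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'

# Install Cilium with IPv6 enabled (the only IPv6-specific setting needed)
cilium install --set ipv6.enabled=true
cilium status --wait
```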
Now, again, we’re just using the standard Cilium commands to enable Hubble: cilium hubble enable --ui. We’ll have a look at the user interface, but we’ll also be using the Hubble CLI to observe our IPv6 flows.
Now, we can just launch the Hubble UI with cilium hubble ui. And if we open the Hubble UI in our browser…
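The two Hubble steps above, as commands:

```shell
# Enable Hubble, including the UI component
cilium hubble enable --ui

# Verify Hubble is reported as enabled
cilium status --wait

# Port-forward and open the Hubble UI in the local browser
cilium hubble ui
```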
and just zooming in and clicking on the namespace where we’re going to deploy our application. Unsurprisingly, there are no flows right now, because we don’t have any applications communicating over IPv6. So, let’s deploy a couple of pods. What we’ve got here are two pods, pinned to different nodes, so we can really see communication between pods on different nodes. Let’s watch IP addresses being allocated to our pods. Not yet, the containers are still being created; it’ll just take a few seconds for them to be ready. Then we collect the IP addresses, and we can see an IPv6 address allocated to each pod.
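A sketch of the two pinned pods, assuming the pod names from the demo (pod-worker, pod-worker2) and Kind's default node names; the container image is an assumption:

```shell
# Hypothetical manifest: two pods pinned to different Kind worker nodes
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
spec:
  nodeName: kind-worker         # pin to the first worker node
  containers:
    - name: netshoot
      image: nicolaka/netshoot  # assumed image; any image with ping/curl works
      command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker2
spec:
  nodeName: kind-worker2        # pin to the second worker node
  containers:
    - name: netshoot
      image: nicolaka/netshoot
      command: ["sleep", "infinity"]
EOF

# Collect the IPv4 and IPv6 addresses once the pods are Running
kubectl get pods -o wide
```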
Now, let’s verify that we have IPv6 connectivity by running a ping from pod-worker to pod-worker2. The ping is successful, so we’ve got pod-to-pod connectivity with the pods located on different nodes. Let’s check pod-to-service connectivity next. We’re going to deploy this configuration: a couple of ReplicaSets running echo servers, and a Service fronting these servers. Notice that we are using dual-stack as the IP family policy, with IPv4 and IPv6 as the IP families for this Service. When we deploy this Deployment and this Service…
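A sketch of the steps above. The pod IPv6 address is hypothetical (use the address collected from kubectl get pods -o wide), and the Service name, selector, and ports are illustrative assumptions; the demo's policy could also be RequireDualStack:

```shell
# Pod-to-pod ping over IPv6 across nodes
kubectl exec pod-worker -- ping -c 3 fd00:10:244:2::1234   # hypothetical pod-worker2 IPv6 address

# Dual-stack Service fronting the echo servers
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ipFamilyPolicy: PreferDualStack   # request ClusterIPs from both families
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: echoserver
  ports:
    - port: 80
      targetPort: 8080
EOF
```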
let’s see whether we get an IPv6 address. Yes, we do: the Service gets both an IPv4 and an IPv6 address.
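Assuming the Service is named echoserver, both ClusterIPs can be checked like this:

```shell
# Both address families should appear in clusterIPs
kubectl get service echoserver -o jsonpath='{.spec.clusterIPs}'
```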
Now, before we conclude, we have a AAAA (quad-A) record allocated which, as you can see, returns an IPv6 address from the DNS lookup for echoserver.default. If we curl this name, the request is successful; that’s from pod-worker to the Service. If we also curl the IP address directly, we are again successful. We’ve got connectivity working to both the DNS name and the IP address. Let’s go back to Hubble and check that we’ve got visibility of our flows. You can see that pod-worker can successfully talk to the echo server. If you click on the columns, you can get more information: the source IP, the destination IP (IPv6 addresses), and information about the Service. As you can see, we were making curl requests to the Service, and you can also see the TCP flags for the traffic. We can also use the CLI to get visibility into our flows.
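The lookup and curl requests above, sketched from pod-worker; the Service IPv6 address shown is hypothetical, and the Service port is assumed to be 80:

```shell
# Resolve the Service name (expect a AAAA record alongside the A record)
kubectl exec pod-worker -- nslookup echoserver.default

# Curl by name, then directly by the Service's IPv6 address
kubectl exec pod-worker -- curl -s http://echoserver.default
kubectl exec pod-worker -- curl -s 'http://[fd00:10:96::5678]'   # hypothetical Service IPv6 address
```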
If we use hubble observe, you can see the communication. You can see the namespace first, then the flows from pod-worker to pod-worker2: that’s the ping we were running, with the ICMPv6 echo request and echo reply, both successful. You can also see the identity of the pod in brackets, and the HTTP requests we were making to the echo server.
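A minimal sketch of the observation above, assuming the default namespace and the pod names from the demo:

```shell
# Observe flows involving pod-worker in the default namespace
hubble observe --namespace default --pod pod-worker
```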
Now, next, what we also want to do is print the name of the node, again just to showcase that the communication between different nodes is happening over IPv6. And that works successfully. So, Hubble really gives us lots of valuable information and insight about our Kubernetes flows: awareness of Kubernetes objects like namespaces and labels, but also the name of the actual pod. Now, by default, Hubble translates IPv6 addresses to pod names, but you can disable this if you just want to see the IPv6 addresses: use --ip-translation=false to display the source and destination as IPv6 addresses. You can also filter the traffic by type, whether it’s TCP or ICMPv6.
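The flag variations mentioned above, as a sketch (flag names as found in the Hubble CLI; check hubble observe --help on your version):

```shell
hubble observe --print-node-name        # show which node each endpoint runs on
hubble observe --ip-translation=false   # show raw IPv6 addresses instead of pod names
hubble observe --protocol ICMPv6        # filter by traffic type
hubble observe --protocol TCP
```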
That’s it. Hopefully, this quick video shows you how you can gain visibility into your IPv6 traffic using Cilium and Hubble. Thanks for watching.