Almost 25 years after its creation, IPv6 adoption is steadily (if slowly) growing. According to Google’s statistics, IPv6 connectivity is now available to 40% of Google users worldwide. In the cloud native space, though, the vast majority of users have not needed the enormous address space that IPv6 provides.
That is changing, however.
Telcos and carriers, large webscalers, IoT organizations: they all require the scale that IPv6 provides. Kubernetes’ IPv6 support has improved over the years, with an important milestone arriving last year: Dual-stack IPv4/IPv6 networking reached General Availability in Kubernetes 1.23. This means that Kubernetes is not only IPv6-ready but also provides a transitional pathway from IPv4 to IPv6.
With Dual Stack, each pod is allocated both an IPv4 and an IPv6 address, so it can communicate with IPv6 systems as well as with the legacy apps and cloud services that still use IPv4.
To run Dual Stack on Kubernetes, you need a CNI that supports it: Cilium, of course, does. To operate Dual Stack and manage the added complexity that comes with IPv6 (128-bit addresses are not exactly easy to remember), you should also consider an observability platform like Hubble.
This blog post will walk you through how to deploy an IPv4/IPv6 Dual Stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities.
The very short version of this tutorial can be seen in the 43-second video below. If you want to do it yourself, follow the instructions further down.
Here are my step-by-step instructions. To make it easy, we’ll be leveraging Kind so that you can test everything yourself. If you already have a Dual Stack cluster, you can skip to Step 2.
Step 1: Deploy a Dual Stack Kubernetes Cluster
First, deploy a Kubernetes cluster with Kind (click on the link to install it if you don’t have it already). You can use the following YAML configuration (save it as cluster.yaml for example):
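(A minimal sketch of the configuration is shown below. The one-control-plane, three-worker layout is an assumption based on the node count in the kind output further down.)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true      # Cilium will be installed instead of the default CNI
  ipFamily: dual               # enable Dual Stack (IPv4 + IPv6)
  apiServerAddress: 127.0.0.1  # IPv4 listen address on the host for the API server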
disableDefaultCNI is set to true because Cilium will be deployed instead of the default CNI.
ipFamily is set to dual for Dual Stack (IPv4 and IPv6) support. More details can be found in the official Kubernetes docs.
apiServerAddress is set to 127.0.0.1: this is the listen address on the host for the Kubernetes API server. Because IPv6 port forwards don’t work with Docker on Windows or Mac, you need to use an IPv4 port forward; this is not needed on Linux. Read more in the kind docs.
Deploy the cluster and you should be up and running in a couple of minutes:
$ kind create cluster --config cluster.yaml
Creating cluster "kind"...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
The first thing to notice is that the nodes themselves pick up both an IPv4 and an IPv6 address:
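For example, you can list each node’s InternalIP entries with a jsonpath query (a sketch; addresses will vary in your environment):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'

Each node should report one IPv4 and one IPv6 InternalIP.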
Step 2: Install Cilium
The next step is to install Cilium, which is required for IP address management and connectivity, and also for flow visibility (the observability platform Hubble is built on top of Cilium).
If you don’t have the Cilium CLI, download and install it by following the official Cilium docs. The CLI is a simple tool for installing and managing Cilium.
Once that’s installed, go ahead and enable Cilium in Dual Stack mode: simply set the parameter --helm-set ipv6.enabled to true (IPv6 is disabled by default). Note that we are not disabling IPv4 (it’s enabled by default), so we will be operating in Dual Stack mode.
nicovibert:~$ cilium install --helm-set ipv6.enabled=true
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.14.0"
ℹ️ Using Cilium version 1.12.1
🔮 Auto-detected cluster name: kind-kind
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
ℹ️ helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kind-kind,encryption.nodeEncryption=false,ipam.mode=kubernetes,ipv6.enabled=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
ℹ️ Storing helm values file in kube-system/cilium-cli-helm-values Secret
🔑 Created CA in secret cilium-ca
🔑 Generating certificates for Hubble...
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap for Cilium version 1.12.1...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed and ready...
✅ Cilium was successfully installed! Run 'cilium status' to view installation health
By this stage, running cilium status should show the Cilium agent and operator as healthy, with all pods ready.
Step 3: Enable Hubble
Again, if you don’t have it already, I recommend you download and install the Hubble client (follow the official Hubble docs).
Enabling Hubble takes a single command. Don’t forget the --ui flag if you’re planning on visualizing the flows in the Hubble UI.
nicovibert:~$ cilium hubble enable --ui
🔑 Found CA in secret cilium-ca
ℹ️ helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kind-kind,encryption.nodeEncryption=false,hubble.enabled=true,hubble.relay.enabled=true,hubble.tls.ca.cert=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNGRENDQWJxZ0F3SUJBZ0lVQ3BrYU5rdElURExSWHhDMjJjZFBGdmxESnRjd0NnWUlLb1pJemowRUF3SXcKYURFTE1Ba0dBMVVFQmhNQ1ZWTXhGakFVQmdOVkJBZ1REVk5oYmlCR2NtRnVZMmx6WTI4eEN6QUpCZ05WQkFjVApBa05CTVE4d0RRWURWUVFLRXdaRGFXeHBkVzB4RHpBTkJnTlZCQXNUQmtOcGJHbDFiVEVTTUJBR0ExVUVBeE1KClEybHNhWFZ0SUVOQk1CNFhEVEl5TURrd05URXlNRGd3TUZvWERUSTNNRGt3TkRFeU1EZ3dNRm93YURFTE1Ba0cKQTFVRUJoTUNWVk14RmpBVUJnTlZCQWdURFZOaGJpQkdjbUZ1WTJselkyOHhDekFKQmdOVkJBY1RBa05CTVE4dwpEUVlEVlFRS0V3WkRhV3hwZFcweER6QU5CZ05WQkFzVEJrTnBiR2wxYlRFU01CQUdBMVVFQXhNSlEybHNhWFZ0CklFTkJNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUVoemZlTmRUWDl1RWNFcjByQlU3b21aYTYKSEhIbjNCd2VhL0liQnZBQ1NlWWl4QWY3MFI5Nm5qdjVYb1ZsWEE4RjJBZitJeE9wM2tUZzRGbGo0d0puRmFOQwpNRUF3RGdZRFZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGT21yCjh2WnJiVTZ5MzhlVWNGc0p6OThRcUIxTU1Bb0dDQ3FHU000OUJBTUNBMGdBTUVVQ0lCTW81NGFDUzNYQW1adEQKNzNpZE1vaFMwNXVRaUJ6MzJXWVJVZmlzc2RnM0FpRUExcUQwY2FqL0lUdWJUM1RrdGE4QVBwcmxTOW9XSWZibQpvejE5eTlWZ3JlND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=,hubble.tls.ca.key=[--- REDACTED WHEN PRINTING TO TERMINAL (USE --redact-helm-certificate-keys=false TO PRINT) ---],hubble.ui.enabled=true,ipam.mode=kubernetes,ipv6.enabled=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
✨ Patching ConfigMap cilium-config to enable Hubble...
🚀 Creating ConfigMap for Cilium version 1.12.1...
♻️ Restarted Cilium pods
⌛ Waiting for Cilium to become ready before deploying other Hubble component(s)...
🚀 Creating Peer Service...
✨ Generating certificates...
🔑 Generating certificates for Relay...
✨ Deploying Relay...
✨ Deploying Hubble UI and Hubble UI Backend...
⌛ Waiting for Hubble to be installed...
ℹ️ Storing helm values file in kube-system/cilium-cli-helm-values Secret
✅ Hubble was successfully enabled!
You’re now ready to launch the Hubble UI with the following command:
nicovibert:~$ cilium hubble ui
ℹ️ Opening "http://localhost:12000" in your browser...
A browser should launch with the Hubble UI. Select the default namespace for now.
Leave the terminal running and move to a new one where you’re going to deploy applications to generate some traffic flow.
Step 4: Deploy Applications
Let’s start by deploying two clients, pod-worker and pod-worker2, with simple Pod manifests. I use the netshoot image in this example but you can use other images if you prefer.
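(The manifests are not reproduced here; a minimal sketch, assuming kind’s default node names, could look like this.)

apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
spec:
  nodeName: kind-worker          # pin the pod to a specific node
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker2
spec:
  nodeName: kind-worker2         # pin the second pod to a different node
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]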
Both pods are manually pinned to different nodes using spec.nodeName. As a result, the successful ping below demonstrates IPv6 connectivity between Pods on different nodes.
nicovibert:~$ IPv6=$(kubectl get pod pod-worker2 -o jsonpath='{.status.podIPs[1].ip}')
nicovibert:~$ kubectl exec -it pod-worker -- ping $IPv6
PING fd00:10:244:1::3203(fd00:10:244:1::3203) 56 data bytes
64 bytes from fd00:10:244:1::3203: icmp_seq=1 ttl=63 time=2.93 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=2 ttl=63 time=0.184 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=3 ttl=63 time=0.171 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=4 ttl=63 time=0.216 ms
You can now test Pod-to-Service connectivity. We’ll use an echo server (an echo server simply sends back the request it receives from the client).
You can use this manifest (link to GitHub) (a slightly modified and simplified version of this echo-server manifest). Notice the ipFamilyPolicy and ipFamilies Service settings required for IPv6 in this excerpt from the manifest:
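(The excerpt below is a reconstruction: the Service name and ports are illustrative, not taken from the original manifest.)

apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  selector:
    app: echoserver
  ipFamilyPolicy: PreferDualStack   # allocate ClusterIPs from both IP families when possible
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 80
    targetPort: 8080

Once the Service is deployed, you can read its IPv6 ClusterIP from spec.clusterIPs and curl it from pod-worker to verify Pod-to-Service connectivity.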
Let’s go back to the Hubble UI. You should be able to see all your flows. To narrow down the results, you can filter based on the name of the pod to only see the flows you are interested in.
Hopefully you, like me, find this pretty cool: you can troubleshoot IPv6 connectivity issues without having to remember 128-bit addresses!
If you customize the columns, as I did, you can also display some fields that are hidden by default.
If you prefer using the CLI, that’s absolutely fine. Stop the terminal where you were running cilium hubble ui; instead, we’re going to run hubble observe.
If you run a continuous IPv6 ping from pod-worker to pod-worker2, you can easily see these flows with hubble observe --ipv6 --from-pod pod-worker.
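For example (a sketch; it assumes the Hubble API is exposed locally with cilium hubble port-forward and that the pods run in the default namespace):

# make the Hubble API reachable locally (leave this running)
cilium hubble port-forward &
# terminal 1: generate continuous IPv6 traffic between the two pods
kubectl exec -it pod-worker -- ping $IPv6
# terminal 2: follow IPv6 flows originating from pod-worker
hubble observe --ipv6 --from-pod pod-worker --follow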
Notice that, in the output above, we see Pod names rather than IPv6 addresses. By default, Hubble translates IP addresses to logical names such as a Pod name or FQDN. If you want to see the source and destination IPv6 addresses instead, disable translation with the --ip-translation=false flag.
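For example (assumed invocation):

hubble observe --ipv6 --from-pod pod-worker --ip-translation=false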
And that’s it! Hopefully you can see how running IPv6 on Kubernetes does not need to be an operational nightmare if you have the right tools in place.
Nico Vibert is a Senior Staff Technical Marketing Engineer at Isovalent, the company behind the open-source cloud-native solution Cilium.
Prior to joining Isovalent, Nico worked in many different roles—operations and support, design and architecture, and technical pre-sales—at companies such as HashiCorp, VMware, and Cisco.
In his current role, Nico focuses primarily on creating content to make networking a more approachable field and regularly speaks at events like KubeCon, VMworld, and Cisco Live.
Nico has held over 15 networking certifications, including the Cisco Certified Internetwork Expert CCIE (# 22990).
Nico is now the Lead Subject Matter Expert on the Cilium Certified Associate (CCA) certification.
Outside of Isovalent, Nico is passionate about intentional diversity & inclusion initiatives and is Chief DEI Officer at the Open Technology organization OpenUK. You can find out more about him on his blog.