Tutorial: Run and Observe IPv6 on Kubernetes with Cilium and Hubble

Nico Vibert

Almost 25 years after its creation, IPv6 adoption is steadily (if slowly) growing. According to Google’s statistics, IPv6 connectivity is now available to about 40% of Google users worldwide. In the cloud native space, though, the vast majority of users have so far not needed the huge address space that IPv6 provides.

It is, however, changing.

Telcos and carriers, large webscalers, IoT organizations: they all require the scale that IPv6 provides. Kubernetes’ IPv6 support has improved over the years, with an important milestone arriving last year: Dual-stack IPv4/IPv6 Networking Reached General Availability in Kubernetes 1.23. This means that Kubernetes is not only IPv6-ready but also provides a transition path from IPv4 to IPv6.

With Dual Stack, each pod is allocated both an IPv4 and an IPv6 address, so it can communicate with both IPv6 systems and the legacy apps and cloud services that still use IPv4.

To run Dual Stack on Kubernetes, you need a CNI that supports it: of course, Cilium does. To operate Dual Stack and manage the added complexity that comes with IPv6 (128-bit addresses are not exactly easy to remember), you should consider an observability platform like Hubble.

This blog post will walk you through how to deploy an IPv4/IPv6 Dual Stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities.

The very short version of this tutorial can be seen in the 43-second video below. If you want to do it yourself, follow the instructions further down.

Here are my step-by-step instructions. To make it easy, we’ll be leveraging Kind so that you can test it yourself. If you already have a Dual Stack cluster, you can skip to Step 2.

Step 1: Deploy a Dual Stack Kubernetes Cluster

First, deploy a Kubernetes cluster with Kind (install it first if you don’t have it already). You can use the following YAML configuration (save it as cluster.yaml, for example):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  ipFamily: dual
  apiServerAddress: 127.0.0.1

The important parameters here are:

  • disableDefaultCNI is set to true, as Cilium will be deployed instead of the default CNI.
  • ipFamily is set to dual for Dual Stack (IPv4 and IPv6 support). More details can be found in the official Kubernetes docs.
  • apiServerAddress is set to 127.0.0.1. This is the listen address on the host for the Kubernetes API server. Because IPv6 port forwards don’t work with Docker on Windows or macOS, you need to use an IPv4 port forward; this is not needed on Linux. Read more in the kind docs.

Deploy the cluster and you should be up and running in a couple of minutes:

$ kind create cluster --config cluster.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 
 ✓ Preparing nodes 📦 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a nice day! 👋

The first thing to notice is that the nodes themselves pick up both an IPv4 and an IPv6 address:

$ kubectl describe nodes | grep -E 'InternalIP'
  InternalIP:  172.18.0.5
  InternalIP:  fc00:f853:ccd:e793::5
  InternalIP:  172.18.0.3
  InternalIP:  fc00:f853:ccd:e793::3
  InternalIP:  172.18.0.4
  InternalIP:  fc00:f853:ccd:e793::4
  InternalIP:  172.18.0.2
  InternalIP:  fc00:f853:ccd:e793::2
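
These node addresses come from the Docker network that kind creates. If you are curious, you can check which IPv4 and IPv6 subnets Docker allocated to it; a quick sketch, assuming kind’s default network name, kind:

$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}} {{end}}' kind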

With the following command, you can see the PodCIDRs from which IPv4 and IPv6 addresses will be allocated to your Pods.

$ kubectl describe nodes | grep PodCIDRs
PodCIDRs:                     10.244.0.0/24,fd00:10:244::/64
PodCIDRs:                     10.244.3.0/24,fd00:10:244:3::/64
PodCIDRs:                     10.244.1.0/24,fd00:10:244:1::/64
PodCIDRs:                     10.244.2.0/24,fd00:10:244:2::/64
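
If you prefer to see which node owns which ranges, a jsonpath variation of the command above prints each node name next to its PodCIDRs:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'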

Step 2: Install Cilium in Dual Stack mode

The next step is to install Cilium. That’s required for IP address management and connectivity and also for flow visibility (as the observability platform Hubble is built on top of Cilium).

If you don’t have the Cilium CLI yet, download and install it by following the official Cilium docs. The CLI is a simple tool to install and manage Cilium.

Once that’s installed, go ahead and install Cilium in Dual Stack mode: simply set the --helm-set ipv6.enabled parameter to true (IPv6 is disabled by default). Note that we are not disabling IPv4 (it’s enabled by default), so we will be operating in Dual Stack mode.

nicovibert:~$ cilium install --helm-set ipv6.enabled=true
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.14.0"
ℹ️  Using Cilium version 1.12.1
🔮 Auto-detected cluster name: kind-kind
🔮 Auto-detected datapath mode: tunnel
🔮 Auto-detected kube-proxy has been installed
ℹ️  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kind-kind,encryption.nodeEncryption=false,ipam.mode=kubernetes,ipv6.enabled=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
ℹ️  Storing helm values file in kube-system/cilium-cli-helm-values Secret
🔑 Created CA in secret cilium-ca
🔑 Generating certificates for Hubble...
🚀 Creating Service accounts...
🚀 Creating Cluster roles...
🚀 Creating ConfigMap for Cilium version 1.12.1...
🚀 Creating Agent DaemonSet...
🚀 Creating Operator Deployment...
⌛ Waiting for Cilium to be installed and ready...
✅ Cilium was successfully installed! Run 'cilium status' to view installation health
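
If you prefer driving the installation with Helm directly instead of the Cilium CLI, the same values can be passed to the chart. A rough sketch, based on the values the CLI rendered above (adjust the version to match yours):

$ helm repo add cilium https://helm.cilium.io
$ helm install cilium cilium/cilium --version 1.12.1 \
    --namespace kube-system \
    --set ipv6.enabled=true \
    --set ipam.mode=kubernetes \
    --set tunnel=vxlan \
    --set operator.replicas=1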

By this stage, when you run cilium status, it should look like this:

nicovibert:~$ cilium status 
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         disabled
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 4, Ready: 4/4, Available: 4/4
Containers:       cilium             Running: 4
                  cilium-operator    Running: 1
Cluster Pods:     4/4 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b: 4
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1: 1
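
As an extra sanity check, you can confirm that IPv6 ended up enabled in the Cilium configuration. A small sketch, assuming the enable-ipv6 key used by recent Cilium releases:

# Should print "true" if IPv6 (and therefore Dual Stack) is active
$ kubectl -n kube-system get configmap cilium-config -o jsonpath='{.data.enable-ipv6}'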

Step 3: Enable Hubble

Again, if you don’t have it already, I recommend you download and install the Hubble client (follow the official Hubble docs).

Enabling Hubble takes a single command. Don’t forget the --ui flag if you’re planning on visualizing the flows in the Hubble UI.

nicovibert:~$ cilium hubble enable --ui
🔑 Found CA in secret cilium-ca
ℹ️  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kind-kind,encryption.nodeEncryption=false,hubble.enabled=true,hubble.relay.enabled=true,hubble.tls.ca.cert=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNGRENDQWJxZ0F3SUJBZ0lVQ3BrYU5rdElURExSWHhDMjJjZFBGdmxESnRjd0NnWUlLb1pJemowRUF3SXcKYURFTE1Ba0dBMVVFQmhNQ1ZWTXhGakFVQmdOVkJBZ1REVk5oYmlCR2NtRnVZMmx6WTI4eEN6QUpCZ05WQkFjVApBa05CTVE4d0RRWURWUVFLRXdaRGFXeHBkVzB4RHpBTkJnTlZCQXNUQmtOcGJHbDFiVEVTTUJBR0ExVUVBeE1KClEybHNhWFZ0SUVOQk1CNFhEVEl5TURrd05URXlNRGd3TUZvWERUSTNNRGt3TkRFeU1EZ3dNRm93YURFTE1Ba0cKQTFVRUJoTUNWVk14RmpBVUJnTlZCQWdURFZOaGJpQkdjbUZ1WTJselkyOHhDekFKQmdOVkJBY1RBa05CTVE4dwpEUVlEVlFRS0V3WkRhV3hwZFcweER6QU5CZ05WQkFzVEJrTnBiR2wxYlRFU01CQUdBMVVFQXhNSlEybHNhWFZ0CklFTkJNRmt3RXdZSEtvWkl6ajBDQVFZSUtvWkl6ajBEQVFjRFFnQUVoemZlTmRUWDl1RWNFcjByQlU3b21aYTYKSEhIbjNCd2VhL0liQnZBQ1NlWWl4QWY3MFI5Nm5qdjVYb1ZsWEE4RjJBZitJeE9wM2tUZzRGbGo0d0puRmFOQwpNRUF3RGdZRFZSMFBBUUgvQkFRREFnRUdNQThHQTFVZEV3RUIvd1FGTUFNQkFmOHdIUVlEVlIwT0JCWUVGT21yCjh2WnJiVTZ5MzhlVWNGc0p6OThRcUIxTU1Bb0dDQ3FHU000OUJBTUNBMGdBTUVVQ0lCTW81NGFDUzNYQW1adEQKNzNpZE1vaFMwNXVRaUJ6MzJXWVJVZmlzc2RnM0FpRUExcUQwY2FqL0lUdWJUM1RrdGE4QVBwcmxTOW9XSWZibQpvejE5eTlWZ3JlND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=,hubble.tls.ca.key=[--- REDACTED WHEN PRINTING TO TERMINAL (USE --redact-helm-certificate-keys=false TO PRINT) ---],hubble.ui.enabled=true,ipam.mode=kubernetes,ipv6.enabled=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan
✨ Patching ConfigMap cilium-config to enable Hubble...
🚀 Creating ConfigMap for Cilium version 1.12.1...
♻️  Restarted Cilium pods
⌛ Waiting for Cilium to become ready before deploying other Hubble component(s)...
🚀 Creating Peer Service...
✨ Generating certificates...
🔑 Generating certificates for Relay...
✨ Deploying Relay...
✨ Deploying Hubble UI and Hubble UI Backend...
⌛ Waiting for Hubble to be installed...
ℹ️  Storing helm values file in kube-system/cilium-cli-helm-values Secret
✅ Hubble was successfully enabled!

You’re now ready to launch the Hubble UI with the following command:

nicovibert:~$ cilium hubble ui
ℹ️  Opening "http://localhost:12000" in your browser...

A browser should launch with the Hubble UI. Select the default namespace for now.
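
If the automatic port-forward ever gets in your way, you can also expose the UI yourself and browse to http://localhost:12000. A minimal sketch, assuming the default hubble-ui Service in kube-system listens on port 80:

$ kubectl port-forward -n kube-system svc/hubble-ui 12000:80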

Leave the terminal running and move to a new one, where you’re going to deploy applications to generate some traffic flows.

Step 4: Deploy Applications

Let’s start by deploying a client, named pod-worker, with this simple Pod manifest. I use the netshoot image in this example but you can use other images if you prefer.

apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
  labels:
    app: pod-worker
spec:
  nodeName: kind-worker
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]

Once you deploy it, notice that it has two IP addresses allocated: one IPv4 and one IPv6. You can get the IPv6 address directly with this command:

$ kubectl get pod pod-worker -o jsonpath='{.status.podIPs[1].ip}'
fd00:10:244:3::ba17
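
If you want both addresses at once (IPv4 first, then IPv6), a quick jsonpath loop over status.podIPs does the trick:

$ kubectl get pod pod-worker -o jsonpath='{range .status.podIPs[*]}{.ip}{"\n"}{end}'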

Deploy another Pod (named pod-worker2) to verify IPv6 connectivity.

apiVersion: v1
kind: Pod
metadata:
  name: pod-worker2
  labels:
    app: pod-worker2
spec:
  nodeName: kind-worker2
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]

Both Pods are manually pinned to different nodes using spec.nodeName. As a result, the successful ping below demonstrates IPv6 connectivity between Pods on different nodes.

nicovibert:~$ IPv6=$(kubectl get pod pod-worker2 -o jsonpath='{.status.podIPs[1].ip}') 
nicovibert:~$ kubectl exec -it pod-worker -- ping $IPv6                               
PING fd00:10:244:1::3203(fd00:10:244:1::3203) 56 data bytes
64 bytes from fd00:10:244:1::3203: icmp_seq=1 ttl=63 time=2.93 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=2 ttl=63 time=0.184 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=3 ttl=63 time=0.171 ms
64 bytes from fd00:10:244:1::3203: icmp_seq=4 ttl=63 time=0.216 ms

You can now test Pod-to-Service connectivity. We’ll use an echo server (a server that simply sends back the request it receives from the client).

You can use this manifest (link to GitHub), which is a slightly modified and simplified version of this echo-server manifest. Notice the ipFamilyPolicy and ipFamilies Service settings required for IPv6 in this excerpt from the manifest:

--------------------------
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
--------------------------

Deploy it:

$ kubectl apply -f echo-kube-ipv6.yaml 
deployment.apps/echoserver created
service/echoserver created

Check the echoserver Service: you should see both IPv4 and IPv6 addresses allocated on the IPs line.

$ kubectl describe svc echoserver        
Name:              echoserver
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=echoserver
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv6,IPv4
IP:                fd00:10:96::168a
IPs:               fd00:10:96::168a,10.96.62.117
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         [fd00:10:244:1::e192]:80,[fd00:10:244:2::852d]:80,[fd00:10:244:2::e645]:80 + 2 more...
Session Affinity:  None
Events:            <none>
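
To grab just the allocated ClusterIPs (handy if you want to script the curl tests later on), a quick jsonpath query works here too:

$ kubectl get svc echoserver -o jsonpath='{.spec.clusterIPs}'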

AAAA records are assigned automatically to Services.

Now you can just use nslookup -q=AAAA to make an IPv6 DNS query.


nicovibert:~$ kubectl exec -it pod-worker -- nslookup -q=AAAA echoserver.default
Server:         10.96.0.10
Address:        10.96.0.10#53
echoserver.default.svc.cluster.local    has AAAA address fd00:10:96::168a

Finally, you can execute curl requests over IPv6 only, using the -6 option.

Both curl requests below, one to the Service name (resolved via its AAAA record) and one to the IPv6 ClusterIP directly, complete successfully.

kubectl exec -it pod-worker -- bash
bash-5.1# curl --interface eth0 -g -6 'http://echoserver.default.svc'
{"host":{"hostname":"echoserver.default.svc","ip":"fd00:10:244:3::ba17","ips":[]},"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"},"request":{"params":{"0":"/"},"query":{},"cookies":{},"body":{},"headers":{"host":"echoserver.default.svc","user-agent":"curl/7.83.1","accept":"*/*"}},"environment":{"PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME":"echoserver-869f89668b-k8f9j", [...]

bash-5.1# curl --interface eth0 -g -6 'http://[fd00:10:96::168a]'
{"host":{"hostname":"[fd00:10:96::168a]","ip":"fd00:10:244:3::ba17","ips":[]},"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"},"request":{"params":{"0":"/"},"query":{},"cookies":{},"body":{},"headers":{"host":"[fd00:10:96::168a]","user-agent":"curl/7.83.1","accept":"*/*"}},"environment":{"PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME":"echoserver-869f89668b-g4rcp" [...]

Step 5: Verify Flows on Hubble

Let’s go back to the Hubble UI. You should be able to see all your flows. To narrow down the results, you can filter based on the name of the pod to only see the flows you are interested in.

Hopefully you, like me, find this pretty cool: you can troubleshoot IPv6 connectivity issues without having to remember 128-bit addresses!

If you update the columns like I did, you can see some fields that are hidden by default:

If you prefer using the CLI, that’s absolutely fine. Stop the cilium hubble ui command in the terminal where it was running; instead, we’re going to use hubble observe.

If you run a continuous IPv6 ping from pod-worker to pod-worker2, you can easily see these flows with hubble observe --ipv6 --from-pod pod-worker:

nicovibert:~$ hubble observe --ipv6 --from-pod pod-worker
Sep  7 15:11:18.288: default/pod-worker (ID:3211) -> default/pod-worker2 (ID:50760) to-overlay FORWARDED (ICMPv6 EchoRequest)
Sep  7 15:11:18.289: default/pod-worker (ID:3211) -> default/pod-worker2 (ID:50760) to-endpoint FORWARDED (ICMPv6 EchoRequest)
Sep  7 15:11:18.289: default/pod-worker (ID:3211) <- default/pod-worker2 (ID:50760) to-overlay FORWARDED (ICMPv6 EchoReply)
Sep  7 15:11:18.289: default/pod-worker (ID:3211) <- default/pod-worker2 (ID:50760) to-endpoint FORWARDED (ICMPv6 EchoReply)

You can even print the node where the Pods are running with the --print-node-name flag:

nicovibert:~$ hubble observe --ipv6 --from-pod pod-worker --print-node-name
Sep  7 15:18:26.033 [kind-kind/kind-worker]: default/pod-worker (ID:3211) -> default/pod-worker2 (ID:50760) to-overlay FORWARDED (ICMPv6 EchoRequest)
Sep  7 15:18:26.034 [kind-kind/kind-worker2]: default/pod-worker (ID:3211) -> default/pod-worker2 (ID:50760) to-endpoint FORWARDED (ICMPv6 EchoRequest)

You should see both HTTP and ICMPv6 flows (if not, simply re-run a curl from the pod-worker shell).

nicovibert:~$ hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false 
[----------------]
  TIMESTAMP: Sep  7 15:27:46.111
     SOURCE: fd00:10:244:3::ba17
DESTINATION: fd00:10:244:1::3203
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoRequest
------------
  TIMESTAMP: Sep  7 15:27:46.111
     SOURCE: fd00:10:244:1::3203
DESTINATION: fd00:10:244:3::ba17
       TYPE: to-overlay
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoReply
------------
  TIMESTAMP: Sep  7 15:28:35.593
     SOURCE: [fd00:10:244:3::ba17]:48612
DESTINATION: [fd00:10:244:2::e645]:80
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: TCP Flags: SYN
------------
  TIMESTAMP: Sep  7 15:28:35.593
     SOURCE: [fd00:10:244:3::ba17]:48612
DESTINATION: [fd00:10:244:2::e645]:80
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: TCP Flags: ACK
[----------------]

Notice that, in the output above, we see IPv6 addresses instead of Pod names. By default, Hubble translates IP addresses to logical names such as the Pod name or FQDN. If you want the raw source and destination IPv6 addresses instead, disable this behavior with the --ip-translation=false flag.

nicovibert:~$ hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false --protocol ICMPv6
  TIMESTAMP: Sep  7 15:27:27.615
     SOURCE: fd00:10:244:3::ba17
DESTINATION: fd00:10:244:1::3203
       TYPE: to-overlay
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoRequest
------------
  TIMESTAMP: Sep  7 15:27:27.615
     SOURCE: fd00:10:244:3::ba17
DESTINATION: fd00:10:244:1::3203
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoRequest

If you just want to see your ping messages, you can simply filter based on the protocol:

nicovibert:~$ hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false --protocol ICMPv6     
  TIMESTAMP: Sep  7 15:27:27.615
     SOURCE: fd00:10:244:3::ba17
DESTINATION: fd00:10:244:1::3203
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoRequest
------------
  TIMESTAMP: Sep  7 15:27:27.615
     SOURCE: fd00:10:244:1::3203
DESTINATION: fd00:10:244:3::ba17
       TYPE: to-overlay
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoReply
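
If you’d rather watch flows stream in live instead of taking a snapshot, hubble observe also accepts the --follow flag, combined with the same filters. A small sketch:

$ hubble observe --ipv6 --from-pod pod-worker --protocol ICMPv6 --follow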

And that’s it! Hopefully you can see that running IPv6 on Kubernetes does not need to be an operational nightmare if you have the right tools in place.

Feel free to get in touch with us to schedule an IPv6 demo of Isovalent Cilium Enterprise and learn about features such as Segment Routing v6 (SRv6).

Thanks for reading.
