An increasing number of organizations are adopting IPv6 in their environments, driven by the exhaustion of public IPv4 space, the scarcity of private IPv4 addresses (especially within large-scale networks), and the need to provide service availability to IPv6-only clients. An intermediate step toward fully supporting IPv6 is dual-stack IPv4/IPv6. Three years ago, 31% of Google users were using IPv6. We’re now up to 45%, and at the current rate, IPv6 will be the majority protocol seen by Google users worldwide (it’s already over 70% in countries like India and France) by the end of 2024. This blog post will walk you through how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities.
What is Dual-Stack Networking in Kubernetes?
IPv4/IPv6 dual-stack networking enables the allocation of IPv4 and IPv6 addresses to Pods and Services. IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod).
IPv4 and IPv6 enabled Services.
Pod off-cluster egress routing (e.g., the Internet) via IPv4 and IPv6 interfaces.
Why do you need Dual-Stack networking?
The Service Provider Dilemma
Service providers and enterprises are faced with growing their networks using IPv6 while continuing to serve IPv4 customers.
Increasingly, the public side of network address translation (NAT) devices is IPv6 rather than IPv4. Service providers cannot continue giving customers globally routable IPv4 addresses, they cannot get new globally routable IPv4 addresses for expanding their networks, and yet they must continue to serve both IPv4 customers and new customers, all of whom are primarily trying to reach IPv4 destinations.
IPv4 and IPv6 must coexist for some number of years, and their coexistence must be transparent to end users. If an IPv4-to-IPv6 transition succeeds, end users should not notice it.
Other strategies exist, such as manually or dynamically configured tunnels and translation devices, but dual stacking is often the preferable solution in many scenarios. The dual-stacked device can interoperate equally with IPv4 devices, IPv6 devices, and other dual-stacked devices. When both devices are dual-stacked, the two devices agree on which IP version to use.
The Kubernetes perspective
While Kubernetes has dual-stack support, using it in practice depends on whether the network plugin (CNI) supports it.
Kubernetes running on IPv4/IPv6 dual-stack networking allows workloads to access IPv4 and IPv6 endpoints natively without additional complexities or performance penalties.
Cluster operators can also choose to expose external endpoints using one or both address families in any order that fits their requirements.
Kubernetes does not make any strong assumptions about the network it runs on. For example, users running on a small IPv4 address space can choose to enable dual-stack on a subset of their cluster nodes and have the rest running on IPv6, which traditionally has a larger available address space.
How do you define Dual Stack Networking in AKS?
You can deploy your AKS clusters in a dual-stack mode using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is configured so pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT’d to the node’s primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
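For reference, a dual-stack overlay cluster of this kind can be created with the Azure CLI by passing both IP families; the resource group, cluster name, and region below are placeholders:

az aks create \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --location <region> \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --ip-families ipv4,ipv6 \
  --generate-ssh-keys

Adding --network-dataplane cilium to the same command selects Azure CNI powered by Cilium as the data plane.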
When will AKS clusters powered by Cilium have Dual Stack availability?
“When will Kubernetes have Dual Stack support?” This question has been asked with increasing frequency ever since alpha support for IPv6 was first added in Kubernetes v1.9. While Kubernetes has supported IPv6-only clusters since v1.18, migration from IPv4 to IPv6 was not possible. Eventually, dual-stack IPv4/IPv6 networking reached general availability (GA) in Kubernetes v1.23.
Starting with Kubernetes 1.29, Azure Kubernetes Service announced the preview availability of Dual Stack on AKS clusters running Azure CNI powered by Cilium.
What is Isovalent Enterprise for Cilium?
Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.
Why Isovalent Enterprise for Cilium?
For enterprise customers requiring support and usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended with the following benefits:
Advanced network policy: advanced network policy capabilities that enable fine-grained control over network traffic for micro-segmentation and improved security.
Hubble flow observability + User Interface: real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
Multi-cluster connectivity via Cluster Mesh: seamless networking and security across multiple cloud providers like AWS, Azure, Google, and on-premises environments.
Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making deploying and managing complex microservices architectures easy.
Enterprise-grade support: Enterprise-grade support from Isovalent’s experienced team of experts ensures that issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.
Pre-Requisites
The following prerequisites need to be taken into account before you proceed with this tutorial:
An up-and-running Kubernetes cluster. If you don’t have one, you can create a cluster using one of the options described later in this post.
Users can contact their partner Sales/SE representative(s) at sales@isovalent.com for more detailed insights into the features below and to access the requisite documentation and Hubble CLI software images.
How can you achieve Dual Stack functionality with Cilium?
You can either create new AKS clusters or upgrade your existing AKS clusters to get the best of both worlds:
Cilium’s high-performing eBPF data plane.
Dual Stack functionality from Azure with Overlay Networking.
You can achieve Dual Stack Networking with the following network plugin combinations (as of now):
| Network Plugin | Default Nodepool OS (during AKS cluster creation) |
| --- | --- |
| Bring your own CNI (BYOCNI) | Azure Linux |
| Bring your own CNI (BYOCNI) | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Azure Linux |
| Upgrade Azure CNI Overlay to Azure CNI powered by Cilium | Ubuntu |
| Upgrade a Kubenet cluster to Azure CNI powered by Cilium | Ubuntu |
Note: AZPC = Azure CNI Powered by Cilium, AL = Azure Linux, OL = Overlay, ACO = Azure CNI Overlay.
AKS clusters created with Azure CNI powered by Cilium come up by default with Cilium 1.14.x (managed by Microsoft).
In the case of BYOCNI, the tests were validated with the Isovalent 1.14.x release.
To install Isovalent Enterprise for Cilium on AKS clusters with the BYOCNI network plugin, contact sales@isovalent.com.
Application pods must be recreated after upgrading an AKS cluster from Azure CNI Overlay to Azure CNI powered by Cilium with the IP family changing from IPv4 to Dual-Stack.
Application pods must also be recreated after upgrading an AKS cluster from Kubenet to Azure CNI powered by Cilium with the IP family changing from IPv4 to Dual-Stack (one way to do this is a rolling restart, as shown below).
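For example, a rolling restart of the owning workloads recreates the pods; the deployment name and namespace below are placeholders:

kubectl rollout restart deployment <deployment-name> -n <namespace>
kubectl rollout status deployment <deployment-name> -n <namespace>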
Note: All tests below were done on an AKS cluster with BYOCNI as the network plugin.
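For reference, on a BYOCNI cluster Cilium is typically installed via Helm with IPv6 enabled. The values below are a sketch based on the open-source chart; the exact Isovalent Enterprise for Cilium chart and values are provided by Isovalent (see the note above):

helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version <cilium-version> \
  --namespace kube-system \
  --set aksbyocni.enabled=true \
  --set nodeinit.enabled=true \
  --set ipv6.enabled=true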
Create sample application(s)
Deploy the Pods
You can deploy clients that have dual-stack functionality. We will use the netshoot image in this example.
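A minimal sketch of such a client Pod follows; the node name is an assumption and should be replaced with one of your cluster nodes:

apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
spec:
  nodeName: <node-1-name>        # pin the Pod to a specific node
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "36000"]  # keep the container running

The second Pod used later, pod1-worker1, follows the same manifest with a different name and a different nodeName.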
Once you deploy it, notice that two IP addresses have been allocated, one IPv4 and one IPv6. You can retrieve them directly with these commands:

kubectl get pod pod-worker -o jsonpath='{.status.podIPs[1].ip}'
fd00::462

kubectl get pod pod-worker -o jsonpath='{.status.podIPs[0].ip}'
10.0.4.141
Deploy another Pod (named pod1-worker1) to verify IPv6 connectivity.
Both Pods are manually pinned to different hosts using spec.nodeName. As a result, the successful pings below demonstrate IPv6 connectivity between Pods on different nodes.
IPv6=$(kubectl get pod pod-worker -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec -it pod1-worker1 -- ping $IPv6
PING fd00::462 (fd00::462) 56 data bytes
64 bytes from fd00::462: icmp_seq=1 ttl=63 time=1.22 ms
64 bytes from fd00::462: icmp_seq=2 ttl=63 time=0.566 ms
64 bytes from fd00::462: icmp_seq=3 ttl=63 time=0.416 ms
64 bytes from fd00::462: icmp_seq=4 ttl=63 time=0.473 ms

IPv6=$(kubectl get pod pod1-worker1 -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec -it pod-worker -- ping $IPv6
PING fd00::54 (fd00::54) 56 data bytes
64 bytes from fd00::54: icmp_seq=1 ttl=63 time=1.45 ms
64 bytes from fd00::54: icmp_seq=2 ttl=63 time=0.642 ms
64 bytes from fd00::54: icmp_seq=3 ttl=63 time=0.474 ms
64 bytes from fd00::54: icmp_seq=4 ttl=63 time=0.534 ms

IPv4=$(kubectl get pod pod1-worker1 -o jsonpath='{.status.podIPs[0].ip}')
kubectl exec -it pod-worker -- ping $IPv4
PING 10.0.0.146 (10.0.0.146) 56(84) bytes of data.
64 bytes from 10.0.0.146: icmp_seq=1 ttl=63 time=1.51 ms
64 bytes from 10.0.0.146: icmp_seq=2 ttl=63 time=0.538 ms
64 bytes from 10.0.0.146: icmp_seq=3 ttl=63 time=0.540 ms
64 bytes from 10.0.0.146: icmp_seq=4 ttl=63 time=0.558 ms

IPv4=$(kubectl get pod pod-worker -o jsonpath='{.status.podIPs[0].ip}')
kubectl exec -it pod1-worker1 -- ping $IPv4
PING 10.0.4.141 (10.0.4.141) 56(84) bytes of data.
64 bytes from 10.0.4.141: icmp_seq=1 ttl=63 time=0.445 ms
64 bytes from 10.0.4.141: icmp_seq=2 ttl=63 time=0.433 ms
64 bytes from 10.0.4.141: icmp_seq=3 ttl=63 time=0.393 ms
64 bytes from 10.0.4.141: icmp_seq=4 ttl=63 time=0.495 ms
Pod-to-Service connectivity
Use an echo server (a server that echoes the request sent by the client back to it).
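A simple stand-in for the echo server, assuming an nginx container (any HTTP echo image will do):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: web
        image: nginx:1.25.1
        ports:
        - containerPort: 80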
Expose the workload via service type LoadBalancer (Optional) - Before AKS 1.27
Before AKS 1.27, only the first IP address of a service is provided to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6.
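A sketch of the two Services, assuming the echoserver selector from the Deployment above:

apiVersion: v1
kind: Service
metadata:
  name: echoserver-ipv4
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv4
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-ipv6
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv6
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 80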
Verify functionality via a command-line web request for both IPv4 and IPv6.
curl -I 13.70.187.148
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 07 May 2024 10:29:58 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl -g -6 'http://[2603:1010:200::1d1]:80/' -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 07 May 2024 10:30:06 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes
Monitor IPv6 traffic flows with Hubble UI.
Note: To obtain the Helm values to install Hubble UI and to access the Enterprise documentation, reach out to sales@isovalent.com and support@isovalent.com.
Hubble UI (see the Enterprise documentation) is enabled via Helm charts. Once the installation is complete, you will notice that the hubble-ui pods are up and running:
kubectl get pods -n hubble-ui
NAME                         READY   STATUS    RESTARTS   AGE
hubble-ui-6d964f9779-gfqr7   2/2     Running   0          20h
Validate the installation and verify the flows on Hubble
To access Hubble UI, forward a local port to the Hubble UI service:
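For example, assuming the hubble-ui Service listens on port 80 in the hubble-ui namespace shown above (the local port is arbitrary):

kubectl port-forward -n hubble-ui svc/hubble-ui 12000:80

Then open http://localhost:12000 in your browser.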
You should be able to see all your flows. To narrow down the results, you can filter based on the pod’s name to only see the flows you are interested in.
Monitor IPv6 traffic flows with Hubble CLI
Hubble’s CLI extends the visibility that is provided by standard kubectl commands like kubectl get pods to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.
The Hubble CLI can be leveraged to observe network flows from Cilium agents. Users can observe the flows from their local workstation for troubleshooting or monitoring.
Use kubectl port-forward to the hubble-relay service, then point the Hubble CLI configuration at the forwarded address.
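A typical sequence looks like the following, assuming hubble-relay runs in kube-system and exposes port 80 (adjust the namespace and ports to your installation); the config step is optional when the default localhost:4245 address is used:

kubectl port-forward -n kube-system svc/hubble-relay 4245:80 &
hubble config set server localhost:4245
hubble status
hubble observe --pod pod-worker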
Hubble Status
hubble status checks the overall health of Hubble within your cluster. If Hubble Relay is in use, the last line of the output shows a counter for the number of connected nodes.
hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12,285/12,285 (100.00%)
Flows/s: 22.37
Connected Nodes: 3/3
Hopefully, this post gave you a good overview of how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities. If you’d like to learn more, you can schedule a demo with our experts.
Amit Gupta is a senior technical marketing engineer at Isovalent, powering eBPF cloud-native networking and security. Amit has 21+ years of experience in Networking, Telecommunications, Cloud, Security, and Open-Source. He has previously worked with Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area outside of work. Amit is an avid runner and cyclist and also spends considerable time helping kids in orphanages.
Further reading:
In this tutorial, you will learn how to deploy Isovalent Enterprise for Cilium on your AKS cluster from the Azure Marketplace on a new cluster, and how to upgrade an existing AKS cluster running Azure CNI powered by Cilium to Isovalent Enterprise for Cilium.
In this tutorial, you will learn how to enable Enterprise features (Layer-3, 4 & 7 policies, DNS-based policies) and observe network flows using the Hubble CLI in an Azure Kubernetes Service (AKS) cluster running Isovalent Enterprise for Cilium.