![Dual Stack on AKS with Cilium](/static/204a5f967efe518f9bb0d606ff411097/d126e/isovalent-social-image-1200x630%402x.png)
An increasing number of organizations are adopting IPv6 in their environments, driven by the exhaustion of the public IPv4 space, the scarcity of private IPv4 addresses (especially within large-scale networks), and the need to provide service availability to IPv6-only clients. An intermediary step in fully supporting IPv6 is dual-stack IPv4/IPv6. Three years ago, 31% of Google users were using IPv6. We’re now up to 45%, and at the current rate, IPv6 will be the majority protocol seen by Google users worldwide (it’s already over 70% in countries like India and France) by the end of 2024. This blog post will walk you through how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities.
What is Dual-Stack Networking in Kubernetes?
IPv4/IPv6 dual-stack networking enables the allocation of IPv4 and IPv6 addresses to Pods and Services. IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:
- Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod).
- IPv4 and IPv6 enabled Services.
- Pod off-cluster egress routing (e.g., the Internet) via IPv4 and IPv6 interfaces.
Why do you need Dual-Stack networking?
The Service Provider Dilemma
- Service providers and enterprises are faced with growing their networks using IPv6 while continuing to serve IPv4 customers.
- Increasingly, the public side of network address translation (NAT) devices is IPv6 rather than IPv4. Service providers cannot continue giving customers globally routable IPv4 addresses, they cannot get new globally routable IPv4 addresses for expanding their networks, and yet they must continue to serve both IPv4 customers and new customers, all of whom are primarily trying to reach IPv4 destinations.
- IPv4 and IPv6 must coexist for some number of years, and their coexistence must be transparent to end users. If an IPv4-to-IPv6 transition succeeds, end users should not notice it.
- Other strategies exist, such as manually or dynamically configured tunnels and translation devices, but dual stacking is often the preferable solution in many scenarios. The dual-stacked device can interoperate equally with IPv4 devices, IPv6 devices, and other dual-stacked devices. When both devices are dual-stacked, the two devices agree on which IP version to use.
The Kubernetes perspective
- While Kubernetes has dual-stack support, it depends on whether the network plugin/CNI supports it.
- Kubernetes running on IPv4/IPv6 dual-stack networking allows workloads to access IPv4 and IPv6 endpoints natively without additional complexities or performance penalties.
- Cluster operators can also choose to expose external endpoints using one or both address families in any order that fits their requirements.
- Kubernetes does not make any strong assumptions about the network it runs on. For example, users running on a small IPv4 address space can choose to enable dual-stack on a subset of their cluster nodes and have the rest running on IPv6, which traditionally has a larger available address space.
How do you define Dual Stack Networking in AKS?
You can deploy your AKS clusters in a dual-stack mode using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is configured so pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT’d to the node’s primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
When will AKS clusters powered by Cilium have Dual Stack availability?
“When will Kubernetes have Dual Stack support?” This question has been asked with increasing frequency ever since alpha support for IPv6 was first added in Kubernetes v1.9. While Kubernetes has supported IPv6-only clusters since v1.18, migration from IPv4 to IPv6 was not possible. Eventually, dual-stack IPv4/IPv6 networking reached general availability (GA) in Kubernetes v1.23.
Starting with Kubernetes 1.29, Azure Kubernetes Service announced the availability, in preview mode, of Dual Stack on AKS clusters running Azure CNI powered by Cilium.
What is Isovalent Enterprise for Cilium?
Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.
Why Isovalent Enterprise for Cilium?
For enterprise customers requiring support and usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended with the following benefits:
- Advanced network policy: advanced network policy capabilities that enable fine-grained control over network traffic for micro-segmentation and improved security.
- Hubble flow observability + User Interface: real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
- Multi-cluster connectivity via Cluster Mesh: seamless networking and security across multiple cloud providers like AWS, Azure, Google, and on-premises environments.
- Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
- Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making deploying and managing complex microservices architectures easy.
- Enterprise-grade support: Enterprise-grade support from Isovalent’s experienced team of experts ensures that issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.
Prerequisites
The following prerequisites need to be taken into account before you proceed with this tutorial:
- An up-and-running Kubernetes cluster. If you don’t have one, you can create a cluster using one of these options:
- The following dependencies should be installed:
- Install Azure CLI.
- You should have an Azure Subscription.
- Install kubectl.
- Install Cilium CLI.
- Install Helm.
- Users can contact their partner Sales/SE representative(s) at sales@isovalent.com for more detailed insights into the features below and to access the requisite documentation and Hubble CLI software images.
How can you achieve Dual Stack functionality with Cilium?
You can either create new AKS clusters or upgrade your existing AKS clusters to get the best of both worlds.
- Cilium’s high-performing eBPF data plane.
- Dual Stack functionality from Azure with Overlay Networking.
You can achieve Dual Stack Networking with the following network plugin combinations (as of now):
| Network Plugin | Default Nodepool OS (during AKS cluster creation) |
| --- | --- |
| Bring your own CNI (BYOCNI) | Azure Linux |
| Bring your own CNI (BYOCNI) | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Azure Linux |
| Upgrade Azure CNI Overlay to Azure CNI powered by Cilium | Ubuntu |
| Upgrade a Kubenet cluster to Azure CNI powered by Cilium | Ubuntu |
Note-
- Read AZPC = Azure CNI Powered by Cilium, AL = Azure Linux, OL = Overlay, ACO = Azure CNI Overlay.
- Azure CNI powered by Cilium clusters come up by default with Cilium 1.14.x (managed by Microsoft).
- In the case of BYOCNI, the tests were validated with the Isovalent 1.14.x release.
- To install Isovalent Enterprise for Cilium on AKS clusters with the network plugin `BYOCNI`, contact sales@isovalent.com.
- Application pods must be recreated after upgrading an AKS cluster from Azure CNI Overlay to Azure CNI powered by Cilium with the IP family changed from IPv4 to Dual-Stack.
- Application pods must be recreated after upgrading an AKS cluster from Kubenet to Azure CNI powered by Cilium with the IP family changed from IPv4 to Dual-Stack.
Create an AKS cluster with BYOCNI in dual-stack mode.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace `SubscriptionName` with your subscription name.
- You can also use your subscription ID instead of your subscription name.
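A minimal sketch of the command (`SubscriptionName` is a placeholder for your own subscription name or ID):

```shell
# Select the subscription that subsequent az commands will use.
# "SubscriptionName" is a placeholder; a subscription ID works too.
az account set --subscription "SubscriptionName"
```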
AKS Resource Group Creation
Create a Resource Group
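A sketch of the resource group creation; the group name and region below are illustrative, so adjust them to your environment:

```shell
# Create a resource group to hold the AKS cluster and its resources.
az group create --name dualstack-rg --location westeurope
```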
AKS Cluster creation
Pass the `--network-plugin` parameter with the value `none` and `--ip-families` set to IPv4 and IPv6.
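A sketch of the cluster creation command; the resource group, cluster name, and node count are illustrative placeholders:

```shell
# Create an AKS cluster with no CNI preinstalled (BYOCNI) and
# dual-stack IP families for nodes, Pods, and Services.
az aks create \
  --resource-group dualstack-rg \
  --name dualstack-byocni \
  --network-plugin none \
  --ip-families ipv4,ipv6 \
  --node-count 2 \
  --generate-ssh-keys
```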
Set the Kubernetes Context
Log in to the Azure portal, browse Kubernetes Services>, select the respective Kubernetes service created (AKS Cluster), and click connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.
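Alternatively, the same can be done from the CLI (resource group and cluster names are the placeholders used earlier):

```shell
# Merge the cluster credentials into your local kubeconfig and
# switch the current context to the new cluster.
az aks get-credentials --resource-group dualstack-rg --name dualstack-byocni
kubectl config current-context
```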
Cluster status check
Check the status of the nodes and make sure they are in a “Ready” state and that the nodes have IPv6 and IPv4 addresses.
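A sketch of the checks, assuming the cluster context is already set:

```shell
# Nodes should report a Ready status.
kubectl get nodes -o wide

# Print the internal addresses of each node; with dual-stack enabled,
# both an IPv4 and an IPv6 InternalIP should appear.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
```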
Install Isovalent Enterprise for Cilium
- Users can contact their partner Sales/SE representative(s) at sales@isovalent.com to access the requisite documentation on how to install Isovalent Enterprise for Cilium on an AKS cluster with BYOCNI as the network plugin.
Validate Cilium version
Check the version of Cilium with `cilium version`:
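For example, using the Cilium CLI against the current kubeconfig context:

```shell
# Print the Cilium CLI version and the image versions running in the cluster.
cilium version
```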
Cilium Health Check
`cilium-health` is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. You can check node-to-node health with `cilium-health status`:
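Since `cilium-health` runs inside the agent, one way to invoke it is via the Cilium DaemonSet (the `kube-system` namespace is an assumption; your install may place the agents elsewhere):

```shell
# Run cilium-health inside a Cilium agent Pod to probe
# node-to-node and endpoint connectivity.
kubectl -n kube-system exec ds/cilium -- cilium-health status
```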
Note- All tests below have been done on an AKS cluster with the network plugin as `BYOCNI`.
Create sample application(s)
Deploy the Pods
- You can deploy clients that have dual-stack functionality. We will use the netshoot image in this example.
- Once you deploy it, notice that two IP addresses have been allocated – IPv4 and IPv6. You can directly get the IPv6 and IPv4 addresses with this command.
- Deploy another Pod (named `pod1-worker1`) to verify IPv6 connectivity.
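A sketch of the two netshoot Pods, each pinned to a different node via `spec.nodeName`; the node names below are placeholders that you should replace with nodes from your own cluster:

```shell
# Two dual-stack test clients pinned to different nodes.
# Replace the nodeName values with real nodes (kubectl get nodes).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
spec:
  nodeName: aks-nodepool1-00000000-vmss000000
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1-worker1
spec:
  nodeName: aks-nodepool1-00000000-vmss000001
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]
EOF

# Each Pod should list one IPv4 and one IPv6 address in .status.podIPs.
kubectl get pod pod-worker -o jsonpath='{.status.podIPs}'
```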
Verify IPv6 connectivity
Pod-to-Pod connectivity
- Both pods are manually pinned to different hosts using `spec.nodeName`. As a result, a successful ping shows IPv6 connectivity between Pods on different nodes.
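A sketch of the check, assuming the two netshoot Pods above; note that the index of the IPv6 entry in `.status.podIPs` depends on which family is primary:

```shell
# Look up the IPv6 address of pod1-worker1 (the second podIPs entry
# when IPv4 is the primary family), then ping it from pod-worker.
POD_IPV6=$(kubectl get pod pod1-worker1 -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec pod-worker -- ping6 -c 4 "$POD_IPV6"
```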
Pod-to-Service connectivity
- Use an echo server (a server that echoes the client’s request back to it).
- Deploy it:
- The `echoserver` Service should have both IPv4 and IPv6 addresses.
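A sketch of an echo server Deployment with a dual-stack ClusterIP Service; the `ealen/echo-server` image and the object names are illustrative choices:

```shell
# echoserver Deployment plus a dual-stack ClusterIP Service.
# ipFamilyPolicy: PreferDualStack requests one ClusterIP per IP family.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - name: echoserver
        image: ealen/echo-server:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: echoserver
  ports:
  - port: 80
    targetPort: 80
EOF

# Both an IPv4 and an IPv6 ClusterIP should be listed.
kubectl get svc echoserver -o jsonpath='{.spec.clusterIPs}'
```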
AAAA queries
- AAAA records are assigned automatically to Services. You can do an `nslookup -q=AAAA` to make an IPv6 DNS query.
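For example, from the netshoot Pod deployed earlier (assuming the `echoserver` Service lives in the `default` namespace):

```shell
# Query the AAAA record of the echoserver Service from inside the cluster.
kubectl exec pod-worker -- nslookup -q=AAAA echoserver.default.svc.cluster.local
```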
HTTP requests
- curl requests to the AAAA record or the IP address should be executed successfully.
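A sketch, again assuming the `echoserver` Service in the `default` namespace:

```shell
# Force an IPv6 request; the name resolves via the AAAA record.
kubectl exec pod-worker -- curl -6 -s http://echoserver.default.svc.cluster.local
```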
Expose the workload via service type LoadBalancer (Optional) - Before AKS 1.27
Before AKS 1.27, only the first IP address for a service will be provided to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6.
- Create and deploy an NGINX deployment.
- Expose the deployment with a Service of type `LoadBalancer`.
- Check that both services have been deployed successfully.
- Verify functionality via a command-line web request.
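The pre-1.27 pattern can be sketched as follows; the deployment and Service names are illustrative:

```shell
# NGINX Deployment plus two single-stack LoadBalancer Services that
# share the same selector: one for IPv4, one for IPv6.
kubectl create deployment nginx --image=nginx:latest

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv4
spec:
  type: LoadBalancer
  ipFamilies: [IPv4]
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv6
spec:
  type: LoadBalancer
  ipFamilies: [IPv6]
  selector:
    app: nginx
  ports:
  - port: 80
EOF

# Each Service should receive a public IP of its own family.
kubectl get svc nginx-ipv4 nginx-ipv6

# Verify via a command-line web request against the IPv4 external IP.
curl -s "http://$(kubectl get svc nginx-ipv4 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
```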
Expose the workload via service type LoadBalancer (Optional) - After AKS 1.27
Starting in AKS v1.27, you can create a dual-stack LoadBalancer service, which provides one IPv4 public IP and one IPv6 public IP.
- Create and deploy an NGINX deployment with a corresponding dual-stack service of type
LoadBalancer
.
- Check that the service has been deployed successfully.
- Verify functionality via a command-line web request for both IPv4 and IPv6.
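The post-1.27 pattern can be sketched as follows; the object names are illustrative, and note that an IPv6 literal in a URL must be wrapped in brackets:

```shell
# NGINX Deployment with one dual-stack LoadBalancer Service.
kubectl create deployment nginx-dual --image=nginx:latest

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-dual
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies: [IPv4, IPv6]
  selector:
    app: nginx-dual
  ports:
  - port: 80
EOF

# EXTERNAL-IP should list one IPv4 and one IPv6 public address.
kubectl get svc nginx-dual

# Verify both families; the IPv6 literal needs brackets in the URL.
curl -4 -s "http://$(kubectl get svc nginx-dual -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
IPV6=$(kubectl get svc nginx-dual -o jsonpath='{.status.loadBalancer.ingress[1].ip}')
curl -6 -s "http://[$IPV6]"
```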
Monitor IPv6 traffic flows with Hubble UI.
Note- To obtain the Helm values to install Hubble UI and to access the Enterprise documentation, reach out to sales@isovalent.com or support@isovalent.com.
Hubble-UI (Enterprise documentation) is enabled via helm charts. Once the installation is complete, you will notice hubble-ui pods are up and running:
Validate the installation and verify the flows on Hubble
- To access Hubble UI, forward a local port to the Hubble UI service:
- Then, open http://localhost:12000 in your browser.
- You should be able to see all your flows. To narrow down the results, you can filter based on the pod’s name to only see the flows you are interested in.
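A sketch of the port forward; the namespace and service port below match a typical install and may differ in an enterprise deployment:

```shell
# Forward local port 12000 to the Hubble UI service,
# then browse to http://localhost:12000.
kubectl -n kube-system port-forward svc/hubble-ui 12000:80
```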
![](https://isovalent.wpengine.com/wp-content/uploads/2024/04/Screenshot-2024-04-12-at-11.11.44-1024x422.png)
Monitor IPv6 traffic flows with Hubble CLI
Hubble’s CLI extends the visibility that is provided by standard kubectl commands like `kubectl get pods` to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.
The Hubble CLI can be leveraged to observe network flows from Cilium agents. Users can observe the flows from their local machine workstation for troubleshooting or monitoring.
Setup Hubble Relay Forwarding
Use `kubectl port-forward` to the hubble-relay service, then point the Hubble CLI at the forwarded Hubble Relay server.
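A sketch of the forwarding, assuming Hubble Relay runs in `kube-system` and the CLI reads the server address from the `HUBBLE_SERVER` environment variable:

```shell
# Forward the Hubble Relay port locally in the background,
# then tell the Hubble CLI where to find it.
kubectl -n kube-system port-forward svc/hubble-relay 4245:80 &
export HUBBLE_SERVER=localhost:4245
```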
Hubble Status
`hubble status` checks the overall health of Hubble within your cluster. If you are using Hubble Relay, a counter for the number of connected nodes appears in the last line of the output.
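With the relay port forwarded, the check itself is a single command:

```shell
# Overall Hubble health; the last line reports relay-connected nodes.
hubble status
```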
View the flows in Hubble CLI
- Traffic from `pod-worker` to `pod1-worker1`.
- Print the node where the pods are running with the `--print-node-name` flag.
- View HTTP and ICMPv6 flows.
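The steps above can be sketched as follows, assuming the test Pods from earlier live in the `default` namespace:

```shell
# Flows between the two test Pods, with the originating node printed.
hubble observe --from-pod default/pod-worker --to-pod default/pod1-worker1 --print-node-name

# Filter to HTTP and ICMPv6 flows respectively.
hubble observe --protocol http
hubble observe --protocol icmpv6
```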
Conclusion
Hopefully, this post gave you a good overview of how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities. If you’d like to learn more, you can schedule a demo with our experts.
Try it out
Start with the IPv6 lab and see how to enable Dual Stack in your enterprise environment.
Further Reading
![Amit Gupta](/static/8a842af1e0697f6c899dfb2cd1997096/35f13/WhatsApp-Image-2023-02-25-at-12.56.48.jpg)
Amit Gupta is a senior technical marketing engineer at Isovalent, powering eBPF cloud-native networking and security. Amit has 22+ years of experience in Networking, Telecommunications, Cloud, Security, and Open-Source. He has previously worked with Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area outside of work. Amit is an avid runner and cyclist and also spends considerable time helping kids in orphanages.