
Dual Stack on AKS with Cilium

Amit Gupta

An increasing number of organizations are adopting IPv6 in their environments, driven by the exhaustion of public IPv4 space, the scarcity of private IPv4 addresses within large-scale networks, and the need to provide service availability to IPv6-only clients. An intermediary step toward fully supporting IPv6 is dual-stack IPv4/IPv6. Three years ago, 31% of Google users were on IPv6; today the figure is 45%, and at the current rate IPv6 will be the majority protocol seen by Google users worldwide by the end of 2024 (it is already over 70% in countries like India and France). This blog post will walk you through how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities.

What is Dual-Stack Networking in Kubernetes?

IPv4/IPv6 dual-stack networking enables the allocation of IPv4 and IPv6 addresses to Pods and Services. IPv4/IPv6 dual-stack on your Kubernetes cluster provides the following features:

  • Dual-stack Pod networking (a single IPv4 and IPv6 address assignment per Pod).
  • IPv4 and IPv6 enabled Services.
  • Pod off-cluster egress routing (e.g., the Internet) via IPv4 and IPv6 interfaces.

Why do you need Dual-Stack networking?

The Service Provider Dilemma

  • Service providers and enterprises are faced with growing their networks using IPv6 while continuing to serve IPv4 customers.
  • Increasingly, the public side of network address translation (NAT) devices is IPv6 rather than IPv4. Service providers cannot continue giving customers globally routable IPv4 addresses, they cannot get new globally routable IPv4 addresses for expanding their networks, and yet they must continue to serve both IPv4 customers and new customers, all of whom are primarily trying to reach IPv4 destinations.
  • IPv4 and IPv6 must coexist for some number of years, and their coexistence must be transparent to end users. If an IPv4-to-IPv6 transition succeeds, end users should not notice it.
  • Other strategies exist, such as manually or dynamically configured tunnels and translation devices, but dual stacking is often the preferable solution in many scenarios. The dual-stacked device can interoperate equally with IPv4 devices, IPv6 devices, and other dual-stacked devices. When both devices are dual-stacked, the two devices agree on which IP version to use.

The Kubernetes perspective

  • While Kubernetes has dual-stack support, it depends on whether the network plugin/CNI supports it.
  • Kubernetes running on IPv4/IPv6 dual-stack networking allows workloads to access IPv4 and IPv6 endpoints natively without additional complexities or performance penalties.
  • Cluster operators can also choose to expose external endpoints using one or both address families in any order that fits their requirements.
  • Kubernetes does not make any strong assumptions about the network it runs on. For example, users running on a small IPv4 address space can choose to enable dual-stack on a subset of their cluster nodes and have the rest running on IPv6, which traditionally has a larger available address space.

How do you define Dual Stack Networking in AKS?

You can deploy your AKS clusters in a dual-stack mode using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is configured so pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT’d to the node’s primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).

When will AKS clusters powered by Cilium have Dual Stack availability?

“When will Kubernetes have Dual Stack support?” This question has been asked with increasing frequency ever since alpha support for IPv6 was first added in Kubernetes v1.9. While Kubernetes has supported IPv6-only clusters since v1.18, migration from IPv4 to IPv6 was not possible. Eventually, dual-stack IPv4/IPv6 networking reached general availability (GA) in Kubernetes v1.23.

Starting with Kubernetes 1.29, Azure Kubernetes Service announced the availability of Dual Stack on AKS clusters running Azure CNI powered by Cilium in preview mode.
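
If you prefer the managed data plane over BYOCNI, the cluster creation for Azure CNI Powered by Cilium in overlay mode with dual-stack would look roughly like the sketch below. The <clusterName> and <resourceGroup> values are placeholders, and because the feature is in preview the exact flags and supported Kubernetes versions may change, so verify against the current Azure documentation.

az aks create --name <clusterName> --resource-group <resourceGroup> \
--network-plugin azure \
--network-plugin-mode overlay \
--network-dataplane cilium \
--ip-families ipv4,ipv6 \
--kubernetes-version 1.29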

What is Isovalent Enterprise for Cilium?

Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium?

For enterprise customers requiring support and usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended with the following benefits:

  • Advanced network policy: advanced network policy capabilities that enable fine-grained control over network traffic for micro-segmentation and improved security.
  • Hubble flow observability + User Interface: real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: seamless networking and security across multiple cloud providers like AWS, Azure, Google, and on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making deploying and managing complex microservices architectures easy.
  • Enterprise-grade support: Enterprise-grade support from Isovalent’s experienced team of experts ensures that issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.

Pre-Requisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • An up-and-running Kubernetes cluster. If you don’t have one, you can create one using the options described later in this post.
  • The following dependencies should be installed; a minimal installation sketch follows this list.
  • Users can contact their partner Sales/SE representative(s) at sales@isovalent.com for more detailed insights into the features below and to access the requisite documentation and Hubble CLI software images.
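
As a minimal sketch, assuming a Debian/Ubuntu workstation, the core dependencies can be installed as follows; adapt the commands to your platform:

# Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# kubectl and kubelogin, installed via the Azure CLI
az aks install-cli
# Helm, used later for the Cilium and Hubble charts
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash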

How can you achieve Dual Stack functionality with Cilium?

You can either create new AKS clusters or upgrade your existing AKS clusters to get the best of both worlds.

  • Cilium’s high-performing eBPF data plane.
  • Dual Stack functionality from Azure with Overlay Networking.

You can achieve Dual Stack Networking with the following network plugin combinations (as of now):

| Network Plugin | Default Nodepool OS (during AKS cluster creation) |
| --- | --- |
| Bring your own CNI (BYOCNI) | Azure Linux |
| Bring your own CNI (BYOCNI) | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Ubuntu |
| Azure CNI (Powered by Cilium) - Overlay Mode | Azure Linux |
| Upgrade Azure CNI Overlay to Azure CNI powered by Cilium | Ubuntu |
| Upgrade a Kubenet cluster to Azure CNI powered by Cilium | Ubuntu |

Note-

  • AZPC = Azure CNI Powered by Cilium
  • AL = Azure Linux
  • OL = Overlay
  • ACO = Azure CNI Overlay
  • Azure CNI powered by Cilium clusters come up by default with Cilium 1.14.x (managed by Microsoft).
  • In the case of BYOCNI, the tests were validated with the Isovalent 1.14.x release.
  • To install Isovalent Enterprise for Cilium on AKS clusters with the BYOCNI network plugin, contact sales@isovalent.com.
  • Application pods must be recreated after upgrading an AKS cluster from Azure CNI Overlay to Azure CNI powered by Cilium with the IP family changing from IPv4 to Dual-Stack.
  • Application pods must be recreated after upgrading an AKS cluster from Kubenet to Azure CNI powered by Cilium with the IP family changing from IPv4 to Dual-Stack.

Create an AKS cluster with BYOCNI in dual-stack mode.

Set the subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.
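
If you are unsure of the subscription name or ID, you can list the subscriptions available to your account and confirm the currently selected one first:

az account list --output table
az account show --output table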

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

AKS Resource Group Creation

Create a Resource Group

clusterName="byocnids"
resourceGroup="byocnids"
vnet="byocnids"
location="southindia"

az group create --name $resourceGroup --location $location

AKS Cluster creation

Pass the --network-plugin parameter with the value none and set --ip-families to ipv4,ipv6.

az aks create --name $clusterName --resource-group $resourceGroup \
--network-plugin none \
--ip-families ipv4,ipv6 \
--kubernetes-version 1.29
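
Cluster creation takes a few minutes. Once it completes, you can confirm that both IP families were configured. The query below is a sketch; the property path reflects the current AKS API and may differ across CLI versions:

az aks show --name $clusterName --resource-group $resourceGroup \
--query '{ipFamilies:networkProfile.ipFamilies, podCidrs:networkProfile.podCidrs, serviceCidrs:networkProfile.serviceCidrs}' \
--output json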

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes services, select the respective Kubernetes service created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --name $clusterName --resource-group $resourceGroup

Cluster status check

Check the status of the nodes and make sure they are in a “Ready” state and that the nodes have IPv6 and IPv4 addresses.

kubectl get nodes -o wide

NAME                                STATUS   ROLES    AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-23195171-vmss000000   Ready    <none>   2d14h   v1.29.0   10.224.0.6    <none>        Ubuntu 22.04.4 LTS   5.15.0-1058-azure   containerd://1.7.7-1
aks-nodepool1-23195171-vmss000001   Ready    <none>   2d14h   v1.29.0   10.224.0.4    <none>        Ubuntu 22.04.4 LTS   5.15.0-1058-azure   containerd://1.7.7-1
aks-nodepool1-23195171-vmss000002   Ready    <none>   2d14h   v1.29.0   10.224.0.5    <none>        Ubuntu 22.04.4 LTS   5.15.0-1058-azure   containerd://1.7.7-1

kubectl describe nodes | grep -E 'InternalIP'

  InternalIP:  10.224.0.6
  InternalIP:  fd1c:466e:bfa3:9c20::6
  InternalIP:  10.224.0.4
  InternalIP:  fd1c:466e:bfa3:9c20::4
  InternalIP:  10.224.0.5
  InternalIP:  fd1c:466e:bfa3:9c20::5

Install Isovalent Enterprise for Cilium
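
The Helm values for Isovalent Enterprise for Cilium are provided by Isovalent (see the Pre-Requisites section). For reference only, a roughly equivalent open-source Cilium installation on a BYOCNI cluster with dual-stack enabled looks like the sketch below; chart values vary by version, so treat this as an assumption rather than the enterprise procedure.

helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.14.9 \
--namespace kube-system \
--set aksbyocni.enabled=true \
--set nodeinit.enabled=true \
--set ipv6.enabled=true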

Validate Cilium version

Check the Cilium version and agent status with cilium status:

kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.0) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    False   [eth0  (Direct Routing)]
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.14.9-cee.1 (v1.14.9-cee.1-5da6625d)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 7/254 allocated from 10.0.1.0/24, IPv6: 7/254 allocated from fd00::100/120
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Enabled]
Controller Status:       49/49 healthy
Proxy Status:            OK, ip 10.0.1.169, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 6.60   Metrics: Disabled
Encryption:              Disabled
Cluster health:          3/3 reachable   (2024-04-08T10:19:59Z)
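
You can also confirm that both address families are enabled in the agent configuration by inspecting the cilium-config ConfigMap. The key names below are from the open-source chart and may differ slightly in enterprise builds:

kubectl -n kube-system get configmap cilium-config -o yaml | grep -E 'enable-ipv4|enable-ipv6'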

Cilium Health Check

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. You can check node-to-node health with cilium-health status:

kubectl -n kube-system exec ds/cilium -- cilium-health status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2024-04-08T10:19:59Z
Nodes:
  aks-nodepool1-23195171-vmss000002 (localhost):
    Host connectivity to 10.224.0.5:
      ICMP to stack:   OK, RTT=1.93953ms
      HTTP to agent:   OK, RTT=276.204µs
    Endpoint connectivity to 10.0.1.197:
      ICMP to stack:   OK, RTT=1.94283ms
      HTTP to agent:   OK, RTT=289.504µs
  aks-nodepool1-23195171-vmss000000:
    Host connectivity to 10.224.0.6:
      ICMP to stack:   OK, RTT=1.92883ms
      HTTP to agent:   OK, RTT=331.806µs
    Endpoint connectivity to 10.0.4.10:
      ICMP to stack:   OK, RTT=1.94753ms
      HTTP to agent:   OK, RTT=621.91µs
  aks-nodepool1-23195171-vmss000001:
    Host connectivity to 10.224.0.4:
      ICMP to stack:   OK, RTT=2.163234ms
      HTTP to agent:   OK, RTT=426.707µs
    Endpoint connectivity to 10.0.0.90:
      ICMP to stack:   OK, RTT=1.94633ms
      HTTP to agent:   OK, RTT=461.407µs
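
Optionally, if you have the Cilium CLI installed on your workstation, you can run the built-in connectivity test suite against the cluster. It deploys temporary test workloads and can take several minutes to complete:

cilium connectivity test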

Note- All tests below have been done on an AKS cluster with the network plugin as BYOCNI.

Create sample application(s)

Deploy the Pods

  • You can deploy clients that have dual-stack functionality. We will use the netshoot image in this example.
apiVersion: v1
kind: Pod
metadata:
  name: pod-worker
  labels:
    app: pod-worker
spec:
  nodeName: aks-nodepool1-23195171-vmss000000
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]
  • Once you deploy it, notice that two IP addresses have been allocated: one IPv4 and one IPv6. You can retrieve them directly with these commands.
kubectl get pod pod-worker -o jsonpath='{.status.podIPs[1].ip}'
fd00::462

kubectl get pod pod-worker -o jsonpath='{.status.podIPs[0].ip}'
10.0.4.141
  • Deploy another Pod (named pod1-worker1) to verify IPv6 connectivity. A sketch for applying both manifests and waiting for them to become Ready follows this list.
apiVersion: v1
kind: Pod
metadata:
  name: pod1-worker1
  labels:
    app: pod1-worker1
spec:
  nodeName: aks-nodepool1-23195171-vmss000001
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]

Verify IPv6 connectivity

Pod-to-Pod connectivity

  • Both Pods are manually pinned to different nodes using spec.nodeName. The successful pings below therefore confirm IPv6 (and IPv4) connectivity between Pods on different nodes. A compact loop covering both address families is sketched after these examples.
IPv6=$(kubectl get pod pod-worker -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec -it pod1-worker1 -- ping $IPv6
PING fd00::462 (fd00::462) 56 data bytes
64 bytes from fd00::462: icmp_seq=1 ttl=63 time=1.22 ms
64 bytes from fd00::462: icmp_seq=2 ttl=63 time=0.566 ms
64 bytes from fd00::462: icmp_seq=3 ttl=63 time=0.416 ms
64 bytes from fd00::462: icmp_seq=4 ttl=63 time=0.473 ms

IPv6=$(kubectl get pod pod1-worker1 -o jsonpath='{.status.podIPs[1].ip}')
kubectl exec -it pod-worker -- ping $IPv6
PING fd00::54 (fd00::54) 56 data bytes
64 bytes from fd00::54: icmp_seq=1 ttl=63 time=1.45 ms
64 bytes from fd00::54: icmp_seq=2 ttl=63 time=0.642 ms
64 bytes from fd00::54: icmp_seq=3 ttl=63 time=0.474 ms
64 bytes from fd00::54: icmp_seq=4 ttl=63 time=0.534 ms

IPv4=$(kubectl get pod pod1-worker1 -o jsonpath='{.status.podIPs[0].ip}')
kubectl exec -it pod-worker -- ping $IPv4
PING 10.0.0.146 (10.0.0.146) 56(84) bytes of data.
64 bytes from 10.0.0.146: icmp_seq=1 ttl=63 time=1.51 ms
64 bytes from 10.0.0.146: icmp_seq=2 ttl=63 time=0.538 ms
64 bytes from 10.0.0.146: icmp_seq=3 ttl=63 time=0.540 ms
64 bytes from 10.0.0.146: icmp_seq=4 ttl=63 time=0.558 ms

IPv4=$(kubectl get pod pod-worker -o jsonpath='{.status.podIPs[0].ip}')
kubectl exec -it pod1-worker1 -- ping $IPv4
PING 10.0.4.141 (10.0.4.141) 56(84) bytes of data.
64 bytes from 10.0.4.141: icmp_seq=1 ttl=63 time=0.445 ms
64 bytes from 10.0.4.141: icmp_seq=2 ttl=63 time=0.433 ms
64 bytes from 10.0.4.141: icmp_seq=3 ttl=63 time=0.393 ms
64 bytes from 10.0.4.141: icmp_seq=4 ttl=63 time=0.495 ms
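
The same checks can be compressed into a small loop that exercises both address families in one go; this is just a convenience wrapper around the commands above:

for i in 0 1; do
  # index 0 is the IPv4 podIP, index 1 the IPv6 podIP
  ip=$(kubectl get pod pod1-worker1 -o jsonpath="{.status.podIPs[$i].ip}")
  kubectl exec pod-worker -- ping -c 2 "$ip"
done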

Pod-to-Service connectivity

  • Use an echo server (a server that replicates the request sent by the client and sends it back).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: ealen/echo-server:latest
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
  - IPv6
  - IPv4
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: ClusterIP
  selector:
    app: echoserver
  • Deploy it:
kubectl apply -f echo-kube-ipv6.yaml
deployment.apps/echoserver created
service/echoserver created
  • The echoserver Service should have both IPv4 and IPv6 addresses.
kubectl describe svc echoserver
Name:              echoserver
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=echoserver
Type:              ClusterIP
IP Family Policy:  PreferDualStack
IP Families:       IPv6,IPv4
IP:                fd5d:bc46:aeb0:8871::6d96
IPs:               fd5d:bc46:aeb0:8871::6d96,10.0.77.141
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         [fd00::15b]:80,[fd00::a0]:80
Session Affinity:  None
Events:            <none>

AAAA queries

  • AAAA records are automatically created for Services with IPv6 addresses. You can run nslookup -q=AAAA to make an IPv6 DNS query. An IPv4 A-record query is sketched after these examples.
kubectl exec -it pod1-worker1 -- nslookup -q=AAAA echoserver.default
Server:		10.0.0.10
Address:	10.0.0.10#53

Name:	echoserver.default.svc.cluster.local
Address: fd5d:bc46:aeb0:8871::6d96

kubectl exec -it pod-worker -- nslookup -q=AAAA echoserver.default
Server:		10.0.0.10
Address:	10.0.0.10#53

Name:	echoserver.default.svc.cluster.local
Address: fd5d:bc46:aeb0:8871::6d96
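
Since the Service is dual-stack, an A-record query for the same name also resolves, returning its IPv4 ClusterIP:

kubectl exec -it pod1-worker1 -- nslookup -q=A echoserver.default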

HTTP requests

  • curl requests to the DNS name (resolving to the AAAA record) or directly to the IPv6 address should succeed. An IPv4 check against the same dual-stack Service is sketched after these examples.

pod-worker:~# curl --interface eth0 -g -6 'http://echoserver.default.svc'
{"host":{"hostname":"echoserver.default.svc","ip":"fd00::462","ips":[]},"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"},"request":{"params":{"0":"/"},"query":{},"cookies":{},"body":{},"headers":{"host":"echoserver.default.svc","user-agent":"curl/8.6.0","accept":"*/*"}},"environment":{"PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME":"echoserver-5c96cdb7b5-54j7g","NODE_VERSION":"20.11.0","YARN_VERSION":"1.22.19","PORT":"80","KUBERNETES_SERVICE_PORT":"443","ECHOSERVER_PORT_80_TCP":"tcp://[fd5d:bc46:aeb0:8871::6d96]:80","ECHOSERVER_PORT_80_TCP_PROTO":"tcp","KUBERNETES_PORT_443_TCP_PROTO":"tcp","ECHOSERVER_SERVICE_HOST":"fd5d:bc46:aeb0:8871::6d96","ECHOSERVER_PORT_80_TCP_ADDR":"fd5d:bc46:aeb0:8871::6d96","KUBERNETES_SERVICE_HOST":"10.0.0.1","KUBERNETES_SERVICE_PORT_HTTPS":"443","KUBERNETES_PORT":"tcp://10.0.0.1:443","KUBERNETES_PORT_443_TCP":"tcp://10.0.0.1:443","ECHOSERVER_SERVICE_PORT":"80","ECHOSERVER_PORT":"tcp://[fd5d:bc46:aeb0:8871::6d96]:80","ECHOSERVER_PORT_80_TCP_PORT":"80","KUBERNETES_PORT_443_TCP_PORT":"443","KUBERNETES_PORT_443_TCP_ADDR":"10.0.0.1","HOME":"/root"}}pod-worker:~#

pod-worker:~# curl --interface eth0 -g -6 'http://[fd5d:bc46:aeb0:8871::6d96]'
{"host":{"hostname":"[fd5d:bc46:aeb0:8871::6d96]","ip":"fd00::462","ips":[]},"http":{"method":"GET","baseUrl":"","originalUrl":"/","protocol":"http"},"request":{"params":{"0":"/"},"query":{},"cookies":{},"body":{},"headers":{"host":"[fd5d:bc46:aeb0:8871::6d96]","user-agent":"curl/8.6.0","accept":"*/*"}},"environment":{"PATH":"/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","HOSTNAME":"echoserver-5c96cdb7b5-tgbsn","NODE_VERSION":"20.11.0","YARN_VERSION":"1.22.19","PORT":"80","ECHOSERVER_PORT_80_TCP":"tcp://[fd5d:bc46:aeb0:8871::6d96]:80","ECHOSERVER_PORT_80_TCP_PROTO":"tcp","KUBERNETES_PORT":"tcp://10.0.0.1:443","KUBERNETES_PORT_443_TCP_PORT":"443","ECHOSERVER_PORT":"tcp://[fd5d:bc46:aeb0:8871::6d96]:80","ECHOSERVER_PORT_80_TCP_ADDR":"fd5d:bc46:aeb0:8871::6d96","KUBERNETES_SERVICE_PORT_HTTPS":"443","KUBERNETES_PORT_443_TCP":"tcp://10.0.0.1:443","KUBERNETES_PORT_443_TCP_PROTO":"tcp","KUBERNETES_SERVICE_PORT":"443","ECHOSERVER_SERVICE_PORT":"80","ECHOSERVER_PORT_80_TCP_PORT":"80","KUBERNETES_SERVICE_HOST":"10.0.0.1","KUBERNETES_PORT_443_TCP_ADDR":"10.0.0.1","ECHOSERVER_SERVICE_HOST":"fd5d:bc46:aeb0:8871::6d96","HOME":"/root"}}

Expose the workload via service type LoadBalancer (Optional)- Before AKS 1.27

Before AKS 1.27, only the first IP address for a service will be provided to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6.

  • Create and deploy an NGINX deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
kubectl apply -f nginx.yaml
  • Expose the deployment with a Service of type LoadBalancer.

Note- To provide a dual-stack service for a single deployment, create two services targeting the same selector, one for IPv4 and one for IPv6.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-ipv4
spec:
  externalTrafficPolicy: Cluster
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-ipv6
spec:
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
---
kubectl apply -f service-type-ipv4.yaml

kubectl apply -f service-type-ipv6.yaml
  • Check that both services have been deployed successfully.
kubectl describe svc nginx-ipv4

Name:                     nginx-ipv4
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.151.217
IPs:                      10.0.151.217
LoadBalancer Ingress:     4.247.22.64
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31177/TCP
Endpoints:                10.0.4.143:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

----------------------------------

kubectl describe svc nginx-ipv6

Name:                     nginx-ipv6
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv6
IP:                       fd5d:bc46:aeb0:8871::dfc7
IPs:                      fd5d:bc46:aeb0:8871::dfc7
LoadBalancer Ingress:     2603:1040:c01::356
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32298/TCP
Endpoints:                [fd00::49d]:80
Session Affinity:         None
External Traffic Policy:  Cluster
HealthCheck NodePort:     31085
Events:                   <none>
  • Verify functionality via a command-line web request.
curl -I 4.247.22.64
HTTP/1.1 200 OK
Server: nginx/1.25.4
Date: Wed, 10 Apr 2024 13:01:10 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 14 Feb 2024 16:03:00 GMT
Connection: keep-alive
ETag: "65cce434-267"
Accept-Ranges: bytes

curl -g -6 'http://[2603:1040:c01::356]:80/' -I
HTTP/1.1 200 OK
Server: nginx/1.25.4
Date: Wed, 10 Apr 2024 13:01:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Wed, 14 Feb 2024 16:03:00 GMT
Connection: keep-alive
ETag: "65cce434-267"
Accept-Ranges: bytes

Expose the workload via service type LoadBalancer (Optional)- After AKS 1.27

Starting in AKS v1.27, you can create a dual-stack LoadBalancer service, which will provide 1 IPv4 public IP and 1 IPv6 public IP.

  • Create and deploy an NGINX deployment with a corresponding dual-stack service of type LoadBalancer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25.1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ipFamilyPolicy: PreferDualStack
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
kubectl apply -f dualstack.yaml
  • Check that the service has been deployed successfully.
kubectl describe svc nginx
Name:                     nginx
Namespace:                default
Labels:                   app=nginx
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         PreferDualStack
IP Families:              IPv4,IPv6
IP:                       10.0.116.242
IPs:                      10.0.116.242,fdb0:56d5:c97f:8871::767a
LoadBalancer Ingress:     13.70.187.148, 2603:1010:200::1d1
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32517/TCP
Endpoints:                10.0.1.160:80,10.0.2.234:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
  • Verify functionality via a command-line web request for both IPv4 and IPv6. If the public IPs are not yet listed, see the watch sketch after these checks.
curl -I 13.70.187.148
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 07 May 2024 10:29:58 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl -g -6 'http://[2603:1010:200::1d1]:80/' -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 07 May 2024 10:30:06 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes
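
Provisioning the Azure load balancer front-end IPs can take a minute or two after the Service is created. If LoadBalancer Ingress is still empty, you can watch for it or query it directly:

kubectl get service nginx --watch
kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[*].ip}'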

Monitor IPv6 traffic flows with Hubble UI.

Note- To obtain the helm values to install Hubble UI and access the Enterprise documentation, you need to reach out to sales@isovalent.com and support@isovalent.com.

Hubble-UI (Enterprise documentation) is enabled via helm charts. Once the installation is complete, you will notice hubble-ui pods are up and running:

kubectl get pods -n hubble-ui
NAME                         READY   STATUS    RESTARTS   AGE
hubble-ui-6d964f9779-gfqr7   2/2     Running   0          20h

Validate the installation and verify the flows on Hubble

  • To access Hubble UI, forward a local port to the Hubble UI service:
kubectl port-forward -n hubble-ui svc/hubble-ui 12000:80
  • Then, open http://localhost:12000 in your browser.
  • You should be able to see all your flows. To narrow down the results, you can filter based on the pod’s name to only see the flows you are interested in.

Monitor IPv6 traffic flows with Hubble CLI

Hubble’s CLI extends the visibility that is provided by standard kubectl commands like kubectl get pods to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.

The Hubble CLI can be leveraged to observe network flows from Cilium agents. Users can observe the flows from their local machine workstation for troubleshooting or monitoring.

Setup Hubble Relay Forwarding

Use kubectl port-forward to hubble-relay, then edit the Hubble config to point at the remote Hubble server component:

kubectl port-forward -n kube-system svc/hubble-relay --address 0.0.0.0 4245:80

Hubble Status

The hubble status command checks the overall health of Hubble within your cluster. If using Hubble Relay, a counter for the number of connected nodes will appear in the last line of the output.

hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 12,285/12,285 (100.00%)
Flows/s: 22.37
Connected Nodes: 3/3

View the flows in Hubble CLI

  • Traffic from pod-worker to pod1-worker1
hubble observe --ipv6 --from-pod pod-worker

Apr 10 09:46:19.860: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-overlay FORWARDED (ICMPv6 EchoRequest)
Apr 10 09:46:19.861: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-endpoint FORWARDED (ICMPv6 EchoRequest)
  • Print the node where the Pods are running with the --print-node-name flag.
hubble observe --ipv6 --from-pod pod-worker --print-node-name

Apr 10 09:46:19.860 [aks-nodepool1-23195171-vmss000000]: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-overlay FORWARDED (ICMPv6 EchoRequest)
Apr 10 09:46:19.861 [aks-nodepool1-23195171-vmss000001]: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-endpoint FORWARDED (ICMPv6 EchoRequest)
Apr 10 09:46:25.984 [aks-nodepool1-23195171-vmss000000]: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-overlay FORWARDED (ICMPv6 EchoRequest)
Apr 10 09:46:25.984 [aks-nodepool1-23195171-vmss000001]: default/pod-worker (ID:26627) -> default/pod1-worker1 (ID:851) to-endpoint FORWARDED (ICMPv6 EchoRequest)
  • View HTTP and ICMPv6 flows. An additional filter example follows this output.
hubble observe --ipv6 --from-pod pod-worker -o dict --ip-translation=false
  TIMESTAMP: Apr 10 10:15:36.417
     SOURCE: fd00::462
DESTINATION: fd00::54
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: ICMPv6 EchoRequest
------------
  TIMESTAMP: Apr 10 10:17:51.552
     SOURCE: fd00::462
DESTINATION: [fd5d:bc46:aeb0:8871::6d96]:80
       TYPE: pre-xlate-fwd
    VERDICT: TRACED
    SUMMARY: TCP
------------
  TIMESTAMP: Apr 10 10:17:51.552
     SOURCE: fd00::462
DESTINATION: [fd00::15b]:80
       TYPE: post-xlate-fwd
    VERDICT: TRANSLATED
    SUMMARY: TCP
------------
  TIMESTAMP: Apr 10 10:17:51.552
     SOURCE: [fd00::462]:32850
DESTINATION: [fd00::15b]:80
       TYPE: to-overlay
    VERDICT: FORWARDED
    SUMMARY: TCP Flags: SYN
------------
  TIMESTAMP: Apr 10 10:17:51.553
     SOURCE: [fd00::462]:32850
DESTINATION: [fd00::15b]:80
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: TCP Flags: SYN
------------
  TIMESTAMP: Apr 10 10:17:51.553
     SOURCE: [fd00::462]:32850
DESTINATION: [fd00::15b]:80
       TYPE: to-endpoint
    VERDICT: FORWARDED
    SUMMARY: TCP Flags: ACK
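
Filters can be combined to narrow the output further; for example, to focus on the HTTP traffic from pod-worker towards the echoserver Service on port 80:

hubble observe --ipv6 --from-pod default/pod-worker --protocol TCP --port 80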

Conclusion

Hopefully, this post gave you a good overview of how to deploy and upgrade an IPv4/IPv6 Dual Stack AKS (Azure Kubernetes Service) cluster with Cilium as the CNI to benefit from its networking, observability, and security capabilities. If you’d like to learn more, you can schedule a demo with our experts.

Try it out

Start with the IPv6 lab and see how to enable Dual Stack in your enterprise environment.


Amit Gupta, Senior Technical Marketing Engineer

Related

Blogs

Cilium in Azure Kubernetes Service (AKS)

In this tutorial, you will learn how to deploy Isovalent Enterprise for Cilium on a new AKS cluster from the Azure Marketplace and how to upgrade an existing AKS cluster running Azure CNI powered by Cilium to Isovalent Enterprise for Cilium.

By Amit Gupta
Blogs

Enabling Enterprise features for Cilium in Azure Kubernetes Service (AKS)

In this tutorial, you will learn how to enable Enterprise features (Layer-3, 4 & 7 policies, DNS-based policies, and observe the Network Flows using Hubble-CLI) in an Azure Kubernetes Service (AKS) cluster running Isovalent Enterprise for Cilium.

By Amit Gupta
Videos

AKS Bring Your Own CNI (BYOCNI) and Cilium

[03:09] In this short video, Senior Technical Marketing Engineer Nico Vibert deploys an AKS cluster without a CNI to ease the installation of Cilium.

By Nico Vibert
