
Cilium, Azure Linux, and Azure Kubernetes Service come together.

Amit Gupta

Isovalent Enterprise for Cilium can now be installed on Azure Kubernetes Service clusters using Azure Linux as the host operating system. In this tutorial, you will learn how to:

  • Install AKS clusters running Azure CNI powered by Cilium with Azure Linux.
  • Migrate your existing clusters on Azure CNI powered by Cilium from Ubuntu to Azure Linux.
  • Upgrade your clusters from Azure CNI powered by Cilium running Azure Linux to Isovalent Enterprise for Cilium.

What is Isovalent Enterprise for Cilium?

Azure Kubernetes Service (AKS) supports Cilium natively: Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with Cilium’s data plane to provide high-performance networking and security. Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the creators of Cilium. Cilium enhances networking and security at the network layer, while Hubble provides thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium?

For enterprise customers requiring support and usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended, with the following benefits:

  • Advanced network policy: Isovalent Cilium Enterprise provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security.
  • Hubble flow observability + User Interface: Isovalent Cilium Enterprise Hubble observability feature provides real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: Isovalent Cilium Enterprise provides seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Cilium Enterprise provides seamless service-to-service communication that’s sidecar-free and advanced load balancing, making it easy to deploy and manage complex microservices architectures.
  • Enterprise-grade support: Isovalent Cilium Enterprise includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.

How can you deploy Isovalent Enterprise for Cilium?

Isovalent Enterprise for Cilium is available in the Azure Marketplace. It can also be deployed using Azure Resource Manager (ARM) Templates and Azure CLI.
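
For example, if you export the offer’s ARM template and parameter file from the Marketplace listing, the deployment from the CLI is a standard resource-group deployment. This is only a minimal sketch; mainTemplate.json and parameters.json are placeholder file names, not part of the offer itself:

# Deploy the exported Marketplace ARM template into an existing resource group.
az deployment group create \
  --resource-group <resource-group> \
  --template-file mainTemplate.json \
  --parameters @parameters.json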

What is Azure Linux?

Microsoft announced the General Availability for Azure Linux Container Host in May 2023. Azure Linux is a lightweight operating system, containing only the packages needed for a cloud environment. Azure Linux can be customized through custom packages and tools, to fit the requirements of your application. Azure Kubernetes Services is one such application that uses production-grade container orchestration as an option for container hosting. The Azure Linux container host for AKS is an open-source Linux distribution created by Microsoft, and it’s available as a container host on Azure Kubernetes Service (AKS).

Why Azure Linux as the host OS?

A common question is why you should choose Azure Linux as the host OS:

  • Optimized to run in Azure. Built, verified, and digitally signed by Microsoft.
  • Supply chain security.
  • A smaller, leaner Linux that reduces footprint and attack surface and optimizes performance.
  • Operational consistency from edge to cloud.
  • Rigorous validation and testing of packages and images on AKS infrastructure.

Prerequisites

The following prerequisites must be considered before you proceed with this tutorial.

  • Azure CLI version 2.48.1 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
  • If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
  • You should have an Azure Subscription.
  • Install kubectl.
  • Install Cilium CLI.
  • Install Helm.
  • Ensure you have enough quota resources to create an AKS cluster. Go to the Subscription blade, navigate to “Usage + Quotas,” and make sure you have enough quota for the following resources (a CLI check is sketched after this list):
    - Regional vCPUs
    - Standard Dv4 Family vCPUs
  • You can choose regions where the quotas are available and not strictly follow the regions picked up during this tutorial.
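
As a quick CLI alternative to the portal check above, you can read the regional usage and limits with az vm list-usage; this is only a sketch that filters the report for the vCPU families mentioned above (westus2 is the region used in Scenario 1):

# List compute usage and quota for a region, filtered for the relevant vCPU families.
az vm list-usage --location westus2 -o table | grep -iE 'Regional vCPUs|Dv4'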

Limitations with Azure Linux Container Host

  • Azure Linux cannot yet be deployed through the Azure Portal.
  • Azure Linux doesn’t support AppArmor. Support for SELinux can be manually configured.
  • Creating an AKS cluster on Isovalent Enterprise for Cilium with Azure Linux as the host OS will be available in a future release.

Installing Azure Linux on Azure Kubernetes Service Clusters

The following combinations of installing and migrating AKS clusters with Azure Linux are supported:

Network Plugin | Default Nodepool OS (during AKS cluster creation) | Additional Nodepool OS (after AKS cluster creation) | Migration from Ubuntu to Azure Linux
Azure CNI (Powered by Cilium) - Overlay Mode | Azure Linux | Azure Linux | N.A
Azure CNI (Powered by Cilium) - Overlay Mode | Ubuntu | Azure Linux | Yes
Azure CNI (Powered by Cilium) - Dynamic IP Allocation Mode | Azure Linux | Azure Linux | N.A
Azure CNI (Powered by Cilium) - Dynamic IP Allocation Mode | Ubuntu | Azure Linux | Yes
Azure CNI (Powered by Cilium) - Overlay Mode to Isovalent Enterprise for Cilium | Azure Linux | N.A | N.A
Bring your own CNI (BYOCNI) | Azure Linux | Azure Linux | N.A
Bring your own CNI (BYOCNI) | Ubuntu | Azure Linux | Yes
  • N.A = Not Applicable
  • BYOCNI (Azure Linux) and BYOCNI (Ubuntu) have also been tested and validated. If you would like more information about them, you can get in touch with sales@isovalent.com or support@isovalent.com.

Choosing Between Installation, Migration, and Upgrade

You can take a look at this flowchart and then decide which path you would like to take:

  • A greenfield installation of your AKS cluster with Azure Linux
  • Upgrade/Migrate your existing AKS clusters from Ubuntu to Azure Linux

Scenario 1: AKS cluster on Azure CNI powered by Cilium (Overlay mode) with Azure Linux

AKS Resource Group Creation

Create a Resource Group

az group create --name azpcoverlayal --location westus2

AKS Cluster creation

Create a cluster with Azure CNI Powered by Cilium with network-plugin as Azure, network-plugin-mode as Overlay, os-sku as AzureLinux and network-dataplane as Cilium.

az aks create -n azpcoverlayal -g azpcoverlayal -l westus2 \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --network-dataplane cilium \
  --os-sku AzureLinux

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service you created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group azpcoverlayal --name azpcoverlayal

Cluster Status Check

Check the status of the nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-34896908-vmss000000   Ready    agent   47m   v1.26.6   10.224.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-34896908-vmss000001   Ready    agent   47m   v1.26.6   10.224.0.6    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-34896908-vmss000002   Ready    agent   47m   v1.26.6   10.224.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
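
Optionally, since the Cilium CLI is listed in the prerequisites, you can also confirm that the Azure-managed Cilium installation reports healthy before adding node pools. This check is not part of the original walkthrough, just a quick sanity test:

# Wait until the Cilium agents and operator report ready.
cilium status --wait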

Add nodepool with OS-type as AzureLinux.

Add an Azure Linux node pool to your existing cluster.

Note: When adding a new Azure Linux node pool, you need to add at least one node pool with --mode System. Otherwise, AKS will not allow you to delete your existing node pool.

az aks nodepool add --resource-group azpcoverlayal --cluster-name azpcoverlayal --name alnodepool --node-count 2 --os-sku AzureLinux --mode System
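
To confirm which node pools satisfy the --mode System requirement from the note above, you can list the pools with their mode and OS SKU; the --query expression below is just one way to format the output:

# Show the name, mode, and OS SKU of every node pool in the cluster.
az aks nodepool list --resource-group azpcoverlayal --cluster-name azpcoverlayal \
  --query "[].{Name:name, Mode:mode, OsSku:osSku}" -o table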

Cluster Status Check

Check the status of the newly added nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-alnodepool-17576648-vmss000000   Ready    agent   51s   v1.26.6   10.224.0.8    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-alnodepool-17576648-vmss000001   Ready    agent   62s   v1.26.6   10.224.0.7    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-34896908-vmss000000    Ready    agent   51m   v1.26.6   10.224.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-34896908-vmss000001    Ready    agent   51m   v1.26.6   10.224.0.6    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-34896908-vmss000002    Ready    agent   51m   v1.26.6   10.224.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22

Scenario 2: AKS cluster on Azure CNI powered by Cilium (Overlay Mode) with Ubuntu (Migration to Azure Linux)

AKS Resource Group Creation

Create a Resource Group

az group create --name azpcoverlay --location francecentral

AKS Cluster creation

Create a cluster with Azure CNI Powered by Cilium with network-plugin as Azure, network-plugin-mode as Overlay, and network-dataplane as Cilium.

az aks create -n azpcoverlay -g azpcoverlay -l francecentral \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --network-dataplane cilium

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service you created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group azpcoverlay --name azpcoverlay

Cluster Status Check

Check the status of the nodes and make sure they are in a ‘Ready’ state and are running ‘Ubuntu’ as the host OS.

kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-20464456-vmss000000   Ready    agent   153m   v1.26.6   10.224.0.5    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000001   Ready    agent   153m   v1.26.6   10.224.0.4    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000002   Ready    agent   153m   v1.26.6   10.224.0.6    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1

Add nodepool with OS-type as AzureLinux

Add an Azure Linux node pool to your existing cluster.

Note: When adding a new Azure Linux node pool, you need to add at least one node pool with --mode System. Otherwise, AKS will not allow you to delete your existing node pool.

az aks nodepool add --resource-group azpcoverlay --cluster-name azpcoverlay --name alnodepool --node-count 2 --os-sku AzureLinux --mode System

Cluster Status Check

Check the status of the newly added nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-alnodepool-38981809-vmss000000   Ready    agent   2m10s   v1.26.6   10.224.0.8    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-alnodepool-38981809-vmss000001   Ready    agent   2m16s   v1.26.6   10.224.0.7    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-nodepool1-20464456-vmss000000    Ready    agent   158m    v1.26.6   10.224.0.5    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000001    Ready    agent   158m    v1.26.6   10.224.0.4    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000002    Ready    agent   158m    v1.26.6   10.224.0.6    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1

Migrate the default nodes to Azure Linux.

You can migrate the default node pool that was created along with the AKS cluster and runs Ubuntu as the host OS. This is optional; you can skip it if it is not required. Migration is a three-part process:

Cordon the existing Nodes (Default)

Cordoning marks specified nodes as unschedulable and prevents any more pods from being added to the nodes.

First, obtain the names of the nodes you’d like to cordon with kubectl get nodes:

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-alnodepool-38981809-vmss000000   Ready    agent   2m10s   v1.26.6   10.224.0.8    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-alnodepool-38981809-vmss000001   Ready    agent   2m16s   v1.26.6   10.224.0.7    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-nodepool1-20464456-vmss000000    Ready    agent   158m    v1.26.6   10.224.0.5    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000001    Ready    agent   158m    v1.26.6   10.224.0.4    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000002    Ready    agent   158m    v1.26.6   10.224.0.6    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1

Next, using kubectl cordon <node-names>, specify the desired nodes in a space-separated list:

kubectl cordon aks-nodepool1-20464456-vmss000000 aks-nodepool1-20464456-vmss000001 aks-nodepool1-20464456-vmss000002

Check the status of the nodes that are being cordoned:

kubectl get nodes -o wide
NAME                                 STATUS                     ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-alnodepool-38981809-vmss000000   Ready                      agent   55m     v1.26.6   10.224.0.8    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-alnodepool-38981809-vmss000001   Ready                      agent   56m     v1.26.6   10.224.0.7    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2    containerd://1.6.22
aks-nodepool1-20464456-vmss000000    Ready,SchedulingDisabled   agent   3h32m   v1.26.6   10.224.0.5    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000001    Ready,SchedulingDisabled   agent   3h32m   v1.26.6   10.224.0.4    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1
aks-nodepool1-20464456-vmss000002    Ready,SchedulingDisabled   agent   3h32m   v1.26.6   10.224.0.6    <none>        Ubuntu 22.04.3 LTS   5.15.0-1049-azure   containerd://1.7.5-1

Drain the existing nodes (Default)

To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow at least one pod replica to be moved at a time; otherwise, the drain/evict operation will fail. To check this, you can run kubectl get pdb -A and make sure ALLOWED DISRUPTIONS is at least 1.

kubectl get pdb -A
NAMESPACE     NAME                 MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
kube-system   coredns-pdb          1               N/A               1                     68m
kube-system   konnectivity-agent   1               N/A               1                     68m
kube-system   metrics-server-pdb   1               N/A               1                     68m

Draining nodes will cause pods running on them to be evicted and recreated on the other schedulable nodes.

To drain nodes, use kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data, again using a space-separated list of node names:

Note: Using --delete-emptydir-data is required to evict the AKS-created coredns and metrics-server pods. If this flag isn’t used, an error is expected.

kubectl drain aks-nodepool1-20464456-vmss000000 aks-nodepool1-20464456-vmss000001 aks-nodepool1-20464456-vmss000002 --ignore-daemonsets --delete-emptydir-data

Remove the existing nodes (Default)

To remove the existing nodes, use the az aks nodepool delete command. The final result is an AKS cluster with a single Azure Linux node pool of the desired SKU size, with all the applications and pods running properly.

az aks nodepool delete \
    --resource-group azpcoverlay \
    --cluster-name azpcoverlay \
    --name nodepool1

Check the status of the nodes to ensure that the default node pool has been deleted and the remaining nodes running Azure Linux are in a ‘Ready’ state:

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-alnodepool-38981809-vmss000000   Ready    agent   60m   v1.26.6   10.224.0.8    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-alnodepool-38981809-vmss000001   Ready    agent   60m   v1.26.6   10.224.0.7    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22

Scenario 3: AKS cluster on Azure CNI powered by Cilium (Dynamic IP mode) with Azure Linux

AKS Resource Group Creation

Create a Resource Group

az group create --name azpcvnet --location canadacentral

AKS Network creation

Create a virtual network with a subnet for nodes and a subnet for pods, and retrieve the subnet IDs.

az network vnet create -g azpcvnet --location canadacentral --name azpcvnet --address-prefixes 10.0.0.0/8 -o none

az network vnet subnet create -g azpcvnet --vnet-name azpcvnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none

az network vnet subnet create -g azpcvnet --vnet-name azpcvnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none

AKS Cluster creation

Create an AKS cluster referencing the node subnet using --vnet-subnet-id and the pod subnet using --pod-subnet-id. Make sure to use the argument --network-plugin as azure, os-sku as AzureLinux and network-dataplane as cilium.

az aks create -n azpcvnet -g azpcvnet -l canadacentral \
  --max-pods 250 \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet/providers/Microsoft.Network/virtualNetworks/azpcvnet/subnets/nodesubnet \
  --pod-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet/providers/Microsoft.Network/virtualNetworks/azpcvnet/subnets/podsubnet \
  --network-dataplane cilium \
  --os-sku AzureLinux

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service you created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group azpcvnet --name azpcvnet

Cluster Status Check

Check the status of the nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-35610968-vmss000000   Ready    agent   56m     v1.26.6   10.240.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-35610968-vmss000001   Ready    agent   55m     v1.26.6   10.240.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-35610968-vmss000002   Ready    agent   56m     v1.26.6   10.240.0.6    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22

Add nodepool with OS-type as AzureLinux.

Add an Azure Linux node pool to your existing cluster. In the case of Azure CNI (Dynamic IP allocation), you need to add a new subnet for pods and nodes in addition to what was created originally at the time of the AKS cluster creation.

Note: When adding a new Azure Linux node pool, you need to add at least one node pool with --mode System. Otherwise, AKS will not allow you to delete your existing node pool.

az network vnet subnet create --resource-group azpcvnet  --vnet-name azpcvnet  --name node2subnet --address-prefixes 10.242.0.0/16 -o none

az network vnet subnet create --resource-group azpcvnet  --vnet-name azpcvnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none

az aks nodepool add --cluster-name azpcvnet --resource-group azpcvnet --name azpclinux --max-pods 250 --node-count 2 --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet/providers/Microsoft.Network/virtualNetworks/azpcvnet/subnets/node2subnet  --pod-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet/providers/Microsoft.Network/virtualNetworks/azpcvnet/subnets/pod2subnet  --os-sku AzureLinux --mode System 

Cluster Status Check

Check the status of the newly added nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-azpclinux-27886613-vmss000000   Ready    agent   3m57s   v1.26.6   10.242.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-azpclinux-27886613-vmss000001   Ready    agent   4m5s    v1.26.6   10.242.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-35610968-vmss000000   Ready    agent   56m     v1.26.6   10.240.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-35610968-vmss000001   Ready    agent   55m     v1.26.6   10.240.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-35610968-vmss000002   Ready    agent   56m     v1.26.6   10.240.0.6    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22

Scenario 4: AKS cluster on Azure CNI powered by Cilium (Dynamic IP mode) with Ubuntu (Migration to Azure Linux)

AKS Resource Group Creation

Create a Resource Group

az group create --name azpcvnet1 --location australiacentral

AKS Network creation

Create a virtual network with a subnet for nodes and a subnet for pods, and retrieve the subnet IDs.

az network vnet create -g azpcvnet1 --location australiacentral --name azpcvnet1 --address-prefixes 10.0.0.0/8 -o none

az network vnet subnet create -g azpcvnet1 --vnet-name azpcvnet1 --name nodesubnet --address-prefixes 10.240.0.0/16 -o none

az network vnet subnet create -g azpcvnet1 --vnet-name azpcvnet1 --name podsubnet --address-prefixes 10.241.0.0/16 -o none

AKS Cluster creation

Create an AKS cluster referencing the node subnet using --vnet-subnet-id and the pod subnet using --pod-subnet-id. Make sure to use the argument --network-plugin as azure and network-dataplane as cilium.

az aks create -n azpcvnet1 -g azpcvnet1 -l australiacentral \
  --max-pods 250 \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet1/providers/Microsoft.Network/virtualNetworks/azpcvnet1/subnets/nodesubnet \
  --pod-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet1/providers/Microsoft.Network/virtualNetworks/azpcvnet1/subnets/podsubnet \
  --network-dataplane cilium

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service you created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group azpcvnet1 --name azpcvnet1

Cluster Status Check

Check the status of the nodes and make sure they are in a ‘Ready’ state and are running ‘Ubuntu’ as the host OS.

kubectl get nodes -o wide
NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-23335260-vmss000000   Ready    agent   21m   v1.26.6   10.240.0.5    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000001   Ready    agent   21m   v1.26.6   10.240.0.4    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000002   Ready    agent   21m   v1.26.6   10.240.0.6    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1

Add nodepool with OS-type as AzureLinux.

Add an Azure Linux node pool to your existing cluster. In the case of Azure CNI (Dynamic IP allocation), you need to add a new subnet for pods and nodes in addition to what was created originally at the time of the AKS cluster creation.

Note: When adding a new Azure Linux node pool, you need to add at least one node pool with --mode System. Otherwise, AKS will not allow you to delete your existing node pool.

az network vnet subnet create --resource-group azpcvnet1  --vnet-name azpcvnet1  --name node2subnet --address-prefixes 10.242.0.0/16 -o none

az network vnet subnet create --resource-group azpcvnet1  --vnet-name azpcvnet1 --name pod2subnet --address-prefixes 10.243.0.0/16 -o none

az aks nodepool add --cluster-name azpcvnet1 --resource-group azpcvnet1 --name azpclinux1 --max-pods 250 --node-count 2 --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet1/providers/Microsoft.Network/virtualNetworks/azpcvnet1/subnets/node2subnet  --pod-subnet-id /subscriptions/<subscription-id>/resourceGroups/azpcvnet1/providers/Microsoft.Network/virtualNetworks/azpcvnet1/subnets/pod2subnet  --os-sku AzureLinux --mode System

Cluster Status Check

Check the status of the newly added nodes and make sure they are in a ‘Ready’ state and are running ‘CBL-Mariner/Linux’ as the host OS.

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-azpclinux1-23905787-vmss000000   Ready    agent   10m   v1.26.6   10.242.0.5    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-azpclinux1-23905787-vmss000001   Ready    agent   10m   v1.26.6   10.242.0.4    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-23335260-vmss000000    Ready    agent   39m   v1.26.6   10.240.0.5    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000001    Ready    agent   39m   v1.26.6   10.240.0.4    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000002    Ready    agent   39m   v1.26.6   10.240.0.6    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1

Migrate the default nodes to Azure Linux.

You can migrate the default node pool that was created along with the AKS cluster and runs Ubuntu as the host OS. This is optional; if not required, you can skip this step. Migration is a three-part process:

Cordon the existing Nodes (Default)

Cordoning marks specified nodes as unschedulable and prevents any more pods from being added to the nodes.

First, obtain the names of the nodes you’d like to cordon with kubectl get nodes:

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-azpclinux1-23905787-vmss000000   Ready    agent   32m   v1.26.6   10.242.0.5    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-azpclinux1-23905787-vmss000001   Ready    agent   32m   v1.26.6   10.242.0.4    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-23335260-vmss000000    Ready    agent   61m   v1.26.6   10.240.0.5    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000001    Ready    agent   61m   v1.26.6   10.240.0.4    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000002    Ready    agent   61m   v1.26.6   10.240.0.6    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1

Next, using kubectl cordon <node-names>, specify the desired nodes in a space-separated list:

kubectl cordon aks-nodepool1-23335260-vmss000000 aks-nodepool1-23335260-vmss000001 aks-nodepool1-23335260-vmss000002

Check the status of the nodes that are being cordoned:

kubectl get nodes -o wide
NAME                                 STATUS                     ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-azpclinux1-23905787-vmss000000   Ready                      agent   45m   v1.26.6   10.242.0.5    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-azpclinux1-23905787-vmss000001   Ready                      agent   45m   v1.26.6   10.242.0.4    <none>        CBL-Mariner/Linux    5.15.131.1-2.cm2   containerd://1.6.22
aks-nodepool1-23335260-vmss000000    Ready,SchedulingDisabled   agent   74m   v1.26.6   10.240.0.5    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000001    Ready,SchedulingDisabled   agent   74m   v1.26.6   10.240.0.4    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1
aks-nodepool1-23335260-vmss000002    Ready,SchedulingDisabled   agent   74m   v1.26.6   10.240.0.6    <none>        Ubuntu 22.04.3 LTS   6.2.0-1014-azure   containerd://1.7.5-1

Drain the existing nodes (Default)

To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow at least one pod replica to be moved at a time; otherwise, the drain/evict operation will fail. To check this, you can run kubectl get pdb -A and make sure ALLOWED DISRUPTIONS is at least 1.

kubectl get pdb -A
NAMESPACE     NAME                 MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
kube-system   coredns-pdb          1               N/A               1                     68m
kube-system   konnectivity-agent   1               N/A               1                     68m
kube-system   metrics-server-pdb   1               N/A               1                     68m

Draining nodes will cause pods running on them to be evicted and recreated on the other schedulable nodes.

To drain nodes, use kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data, again using a space-separated list of node names:

Note: Using --delete-emptydir-data is required to evict the AKS-created coredns and metrics-server pods. If this flag isn’t used, an error is expected.

kubectl drain aks-nodepool1-23335260-vmss000000 aks-nodepool1-23335260-vmss000001 aks-nodepool1-23335260-vmss000002 --ignore-daemonsets --delete-emptydir-data

Remove the existing nodes (Default)

To remove the existing nodes, use the az aks nodepool delete command. The final result is an AKS cluster with a single Azure Linux node pool of the desired SKU size, with all the applications and pods running properly.

az aks nodepool delete \
    --resource-group azpcvnet1 \
    --cluster-name azpcvnet1 \
    --name nodepool1

Check the status of the nodes to ensure that the default node pool has been deleted and the remaining nodes running Azure Linux are in a ‘Ready’ state:

kubectl get nodes -o wide
NAME                                 STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-azpclinux1-23905787-vmss000000   Ready    agent   48m   v1.26.6   10.242.0.5    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22
aks-azpclinux1-23905787-vmss000001   Ready    agent   48m   v1.26.6   10.242.0.4    <none>        CBL-Mariner/Linux   5.15.131.1-2.cm2   containerd://1.6.22

Scenario 5: In-place OS SKU migration (preview)

You can now migrate your existing Ubuntu node pools to Azure Linux by changing the OS SKU of the node pool, which rolls the cluster through the standard node image upgrade process. This new feature doesn’t require the creation of new node pools.

Install the aks-preview extension.

Install the aks-preview extension using the az extension add command.

az extension add --name aks-preview

Register the OSSKUMigrationPreview feature flag

  • Register the OSSKUMigrationPreview feature flag on your subscription using the az feature register command.
az feature register --namespace Microsoft.ContainerService --name OSSKUMigrationPreview
  • Check the registration status using the az feature list command.
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/OSSKUMigrationPreview')].{Name:name,State:properties.state}"
Name                                              State
------------------------------------------------  ----------
Microsoft.ContainerService/OSSKUMigrationPreview  Registered
  • Refresh the registration of the OSSKUMigrationPreview feature flag using the az provider register command.
az provider register --namespace Microsoft.ContainerService

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service you created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group azpcvnet1 --name azpcvnet1

Note: You can use one of the existing scenarios to see how OS SKU migration occurs on an existing AKS cluster running Ubuntu as the host OS.

Migrate the OS SKU of your Ubuntu node pool.

  • Migrate the OS SKU of your node pool to Azure Linux using the az aks nodepool update command. This command updates the OS SKU for your node pool from Ubuntu to Azure Linux. The OS SKU change triggers an immediate upgrade operation, which takes several minutes to complete.
az aks nodepool update --resource-group azpcvnet1 --cluster-name azpcvnet1 --name azpcvnet1 --os-sku AzureLinux

Truncated output:

"nodeImageVersion": "AKSAzureLinux-V2gen2-202402.26.0",
  "nodeInitializationTaints": null,
  "nodeLabels": null,
  "nodePublicIpPrefixId": null,
  "nodeTaints": null,
  "orchestratorVersion": "1.29",
  "osDiskSizeGb": 128,
  "osDiskType": "Managed",
  "osSku": "AzureLinux",
  "osType": "Linux",
  "podIpAllocationMode": null,
  "podSubnetId": null,
  "powerState": {
    "code": "Running"
  },

Verify the OS SKU migration

Once the migration is complete on your test clusters, you should verify the following to ensure a successful migration:

  • If your migration target is Azure Linux, run the kubectl get nodes -o wide command. The output should show CBL-Mariner/Linux as your OS image and .cm2 at the end of your kernel version.
  • Run the kubectl get pods -o wide -A command to verify that your pods and DaemonSets are running on the new node pool.
  • Run the kubectl get nodes --show-labels command to verify that all of the node labels in your upgraded node pool are what you expect (a label-filtering sketch follows the outputs below).
kubectl get nodes -o wide

NAME                                STATUS   ROLES    AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
aks-nodepool1-32804940-vmss000000   Ready    <none>   164m   v1.29.0   10.224.0.6    <none>        CBL-Mariner/Linux   5.15.148.2-2.cm2   containerd://1.6.26
aks-nodepool1-32804940-vmss000001   Ready    <none>   162m   v1.29.0   10.224.0.5    <none>        CBL-Mariner/Linux   5.15.148.2-2.cm2   containerd://1.6.26
aks-nodepool1-32804940-vmss000002   Ready    <none>   160m   v1.29.0   10.224.0.4    <none>        CBL-Mariner/Linux   5.15.148.2-2.cm2   containerd://1.6.26

kubectl get pods -o wide -A

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE    IP           NODE                                NOMINATED NODE   READINESS GATES
default       myapp-blue-64b65f55d5-w5xr6           1/1     Running   0          162m   10.0.4.81    aks-nodepool1-32804940-vmss000001   <none>           <none>
default       myapp-green-545dbf78db-qqsxj          1/1     Running   0          164m   10.0.3.12    aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   cilium-5d4dm                          1/1     Running   0          164m   10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   cilium-kgndd                          1/1     Running   0          163m   10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   cilium-node-init-dstt2                1/1     Running   0          163m   10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   cilium-node-init-g9jx9                1/1     Running   0          161m   10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   cilium-node-init-rschj                1/1     Running   0          164m   10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   cilium-operator-6f57667c66-sf4bf      1/1     Running   0          162m   10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   cilium-q5t6t                          1/1     Running   0          161m   10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   cloud-node-manager-6g4bt              1/1     Running   0          163m   10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   cloud-node-manager-7f4w2              1/1     Running   0          164m   10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   cloud-node-manager-zwnn6              1/1     Running   0          161m   10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   coredns-556b984759-q5fpm              1/1     Running   0          162m   10.0.3.77    aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   coredns-556b984759-s82w2              1/1     Running   0          160m   10.0.0.149   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   coredns-autoscaler-7c88465478-7l29f   1/1     Running   0          160m   10.0.0.171   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   csi-azuredisk-node-gsbhf              3/3     Running   0          40m    10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   csi-azuredisk-node-x7nxh              3/3     Running   0          40m    10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   csi-azuredisk-node-zpzch              3/3     Running   0          40m    10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   csi-azurefile-node-25646              3/3     Running   0          40m    10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   csi-azurefile-node-5tzgp              3/3     Running   0          40m    10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   csi-azurefile-node-frqwl              3/3     Running   0          40m    10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   hubble-relay-676964884b-hmjkh         1/1     Running   0          162m   10.0.3.177   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   hubble-ui-5b87fb7b67-lmpc6            2/2     Running   0          162m   10.0.4.118   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   konnectivity-agent-66f58f489-8qwbf    1/1     Running   0          164m   10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   konnectivity-agent-66f58f489-lx9q4    1/1     Running   0          160m   10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   kube-proxy-d79lq                      1/1     Running   0          163m   10.224.0.5   aks-nodepool1-32804940-vmss000001   <none>           <none>
kube-system   kube-proxy-hp7fk                      1/1     Running   0          164m   10.224.0.6   aks-nodepool1-32804940-vmss000000   <none>           <none>
kube-system   kube-proxy-whp5l                      1/1     Running   0          161m   10.224.0.4   aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   metrics-server-6bb9c967d6-6f582       2/2     Running   0          40m    10.0.0.82    aks-nodepool1-32804940-vmss000002   <none>           <none>
kube-system   metrics-server-6bb9c967d6-pjhp2       2/2     Running   0          40m    10.0.4.30    aks-nodepool1-32804940-vmss000001   <none>           <none>
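
For the label check in the last bullet above, it can be easier to surface the OS-related labels as columns instead of scanning the full --show-labels output. The kubernetes.azure.com/os-sku label key is an assumption about AKS node labels; adjust it if your cluster exposes the OS SKU under a different key:

# Show the standard OS label and the (assumed) AKS os-sku label as columns.
kubectl get nodes -L kubernetes.io/os -L kubernetes.azure.com/os-sku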

Blue-Green tests during upgrade (Optional)

Blue-green deployments allow you to run multiple versions of your application side by side and do a clean cutover from v1 to v2. You would bring v2 of your application live, next to production, and have the ability to test it out without impacting production. Once you’re happy with v2, you will switch production from blue to green.

The easiest way to set up blue-green deployments is by using the Service object in Kubernetes. In the example below, you deploy a blue and a green version of the application and observe how the traffic is switched from green to blue while the upgrade continues. In between, the traffic is also manually switched back to green to verify that there are no traffic drops.

Deployment

  • Create a deployment myapp-blue
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-blue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: blue
  template:
    metadata:
      labels:
        app: myapp
        version: blue
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  • Create a deployment myapp-green
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
  • Deploy the myapp-blue and myapp-green deployments.
kubectl apply -f blue-deployment.yaml

kubectl apply -f green-deployment.yaml
  • Check the status of the deployments.
kubectl get deployments
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
myapp-blue    1/1     1            1           169m
myapp-green   1/1     1            1           168m
  • Pod creation: the deployments will create two distinct pods with the prefixes myapp-blue-* and myapp-green-*. They should be up and running.
kubectl get pods -o wide

NAME                           READY   STATUS    RESTARTS   AGE    IP          NODE                                NOMINATED NODE   READINESS GATES
myapp-blue-64b65f55d5-w5xr6    1/1     Running   0          144m   10.0.4.81   aks-nodepool1-32804940-vmss000001   <none>           <none>
myapp-green-545dbf78db-qqsxj   1/1     Running   0          146m   10.0.3.12   aks-nodepool1-32804940-vmss000000   <none>           <none>
  • Create a Service object (svc.yaml) and apply it.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    version: blue
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: LoadBalancer
kubectl apply -f svc.yaml
  • Ensure that the service has a Public IP (Optional).
kubectl get svc

NAME         TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
kubernetes   ClusterIP      10.0.0.1     <none>          443/TCP        23h
myapp        LoadBalancer   10.0.65.57   20.235.182.29   80:30383/TCP   174m

Traffic generation and upgrade

  • Before the upgrade, you can send traffic toward the public IP of the myapp Service (Load Balancer), which has the previously created pods as its backend (a traffic-generation sketch follows this list).
  • Initiate the upgrade on the node pool for the cluster (nodepool1 in this case).
  • Notice that the green app is running on node 1.
kubectl get pods -o wide

NAME                           READY   STATUS    RESTARTS   AGE     IP          NODE                                NOMINATED NODE   READINESS GATES
myapp-blue-64b65f55d5-w5xr6    1/1     Running   0          4h54m   10.0.4.81   aks-nodepool1-32804940-vmss000002   <none>           <none>
myapp-green-545dbf78db-qqsxj   1/1     Running   0          4h56m   10.0.3.12   aks-nodepool1-32804940-vmss000001   <none>           <none>
  • Edit the Service object and switch the selector version (a kubectl patch sketch follows this list). Notice that the blue app is running on node 2.
  • Notice that nodes are upgraded from Ubuntu to Azure Linux without traffic disruption.
  • As the upgrade proceeds, the apps are moved to nodes node0 and node1, respectively, but the user experience is not compromised.
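
A minimal way to exercise the steps above is to run a simple request loop against the Service’s public IP while the upgrade is in progress, and to patch the Service selector between the blue and green versions. The external IP below is a placeholder for the EXTERNAL-IP shown by kubectl get svc:

# Continuously poll the Service to watch for dropped requests during the upgrade.
while true; do curl -s -o /dev/null -w "%{http_code}\n" http://<external-ip>/; sleep 1; done

# Switch the Service selector to green, then back to blue, without editing YAML by hand.
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"blue"}}}'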

Scenario 6: AKS cluster on Isovalent Enterprise for Cilium with Azure Linux

Note: You can upgrade existing clusters, as described in Scenarios 1 to 4, to Isovalent Enterprise for Cilium through the Azure Marketplace. We have chosen one of those options to highlight the upgrade process; the upgrade steps are the same for all four scenarios.

You can follow this blog and the steps to upgrade an existing AKS cluster to Isovalent Enterprise for Cilium. Make sure you take care of the prerequisites.

  • In the Azure portal, search for Marketplace on the top search bar. In the results, under Services, select Marketplace.

  • Type ‘Isovalent’ in the search window and select the offer.

  • On the Plans + Pricing tab, select an option. Ensure that the terms are acceptable, and then select Create.
  • Select the resource group containing the cluster that will be upgraded.
  • Under ‘Create New Dev Cluster,’ select ‘No,’ and click ‘Next: Cluster Details.’

  • As ‘No’ was selected, this will upgrade an already existing cluster in that region.
  • The name of the AKS cluster will be auto-populated when you click the drop-down selection.
  • Click ‘Next: Review + Create.’

  • Once final validation is complete, click ‘Create.’

  • When the application is deployed, the portal will show ‘Your deployment is complete,’ along with deployment details.

  • Verify that the nodes are running Azure Linux: go to Resource Groups > Kubernetes Services > select the AKS cluster > Node pools.
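
If you prefer the CLI over the portal for this check, the OS SKU of each node pool can be read from the cluster’s agent pool profiles; the osSku field name follows the node pool output shown earlier, and the resource group and cluster name are placeholders:

# Confirm that the node pools report AzureLinux as their OS SKU.
az aks show --resource-group <resource-group> --name <cluster-name> \
  --query "agentPoolProfiles[].{Name:name, OsSku:osSku}" -o table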

How do you upgrade Azure Linux Container Host Nodes?

The Azure Linux Container Host ships updates through updated Azure Linux node images.

Note: Ensure you have an AKS cluster running Azure Linux, or one migrated to Azure Linux by following the steps outlined in the previous sections.

Manually upgrade your cluster.

To manually upgrade the node image on a node pool:

az aks nodepool upgrade \
    --resource-group azcnial \
    --cluster-name azcnial \
    --name alnodepool \
    --node-image-only
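
To see which node image version the pool will move to (or to confirm afterwards that it is already on the latest image), az aks nodepool get-upgrades reports the latest available node image version for a node pool:

# Show the latest available node image version for the node pool.
az aks nodepool get-upgrades \
    --resource-group azcnial \
    --cluster-name azcnial \
    --nodepool-name alnodepool \
    -o table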

Validation- Isovalent Enterprise for Cilium

Validate the version of Isovalent Enterprise for Cilium

Check the version of Isovalent Enterprise for Cilium with cilium version:

kubectl -n kube-system exec ds/cilium -- cilium version
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Client: 1.13.4-cee.1 22f99e91 2023-06-15T03:11:44+00:00 go version go1.19.10 linux/amd64
Daemon: 1.13.4-cee.1 22f99e91 2023-06-15T03:11:44+00:00 go version go1.19.10 linux/amd64

Cilium Health Check

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. You can check node-to-node health with cilium-health status:

kubectl -n kube-system exec ds/cilium -- cilium-health status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2023-11-02T13:13:21Z
Nodes:
  aks-alnodepool-38475519-vmss000001 (localhost):
    Host connectivity to 10.240.0.120:
      ICMP to stack:   OK, RTT=342.3µs
      HTTP to agent:   OK, RTT=321.801µs
    Endpoint connectivity to 10.0.3.230:
      ICMP to stack:   OK, RTT=346.601µs
      HTTP to agent:   OK, RTT=995.002µs
  aks-alnodepool-38475519-vmss000000:
    Host connectivity to 10.240.0.91:
      ICMP to stack:   OK, RTT=2.584705ms
      HTTP to agent:   OK, RTT=1.347503ms
    Endpoint connectivity to 10.0.4.49:
      ICMP to stack:   OK, RTT=633.301µs
      HTTP to agent:   OK, RTT=4.180909ms

Cilium Connectivity Test

The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicy objects, and uses various connectivity paths to connect between them. Connectivity paths include with and without service load-balancing and various network policy combinations.

The cilium connectivity test was run for all of the above scenarios, and the tests passed successfully. A truncated output from one such test run is shown below.

cilium connectivity test
ℹ️  Monitor aggregation detected, will skip some flow validation steps
[azpcvnet] Creating namespace cilium-test for connectivity check...
[azpcvnet] Deploying echo-same-node service...
[azpcvnet] Deploying DNS test server configmap...
[azpcvnet] Deploying same-node deployment...
[azpcvnet] Deploying client deployment...
[azpcvnet] Deploying client2 deployment...
[azpcvnet] Deploying echo-other-node service...
[azpcvnet] Deploying other-node deployment...
[host-netns] Deploying byocni daemonset...
[host-netns-non-cilium] Deploying byocni daemonset...
[azpcvnet] Deploying echo-external-node deployment...
[azpcvnet] Waiting for deployments [client client2 echo-same-node] to become ready...
[azpcvnet] Waiting for deployments [echo-other-node] to become ready...
[azpcvnet] Waiting for CiliumEndpoint for pod cilium-test/client-6f6788d7cc-qzvl2 to appear...
[azpcvnet] Waiting for CiliumEndpoint for pod cilium-test/client2-bc59f56d5-7d9nx to appear...
[azpcvnet] Waiting for pod cilium-test/client-6f6788d7cc-qzvl2 to reach DNS server on cilium-test/echo-same-node-6d449fcc4-9r7wb pod...
[azpcvnet] Waiting for pod cilium-test/client2-bc59f56d5-7d9nx to reach DNS server on cilium-test/echo-same-node-6d449fcc4-9r7wb pod...
[azpcvnet] Waiting for pod cilium-test/client-6f6788d7cc-qzvl2 to reach DNS server on cilium-test/echo-other-node-5dbf9455cb-p2662 pod...
[azpcvnet] Waiting for pod cilium-test/client2-bc59f56d5-7d9nx to reach DNS server on cilium-test/echo-other-node-5dbf9455cb-p2662 pod...
[azpcvnet] Waiting for pod cilium-test/client-6f6788d7cc-qzvl2 to reach default/kubernetes service...
[azpcvnet] Waiting for pod cilium-test/client2-bc59f56d5-7d9nx to reach default/kubernetes service...
🏃 Running tests...
✅ All 42 tests (313 actions) successful, 12 tests skipped, 1 scenarios skipped.

Caveats/Troubleshooting

  • If you add a node pool on a cluster using Azure CNI (Dynamic IP allocation) or Azure CNI powered by Cilium (Dynamic IP mode) without adding a different/new subnet for both pods and nodes, you will observe this error:
az aks nodepool add --resource-group azpcvnet --cluster-name azpcvnet --name azpcvnet --node-count 2 --os-sku AzureLinux --mode System
The behavior of this command has been altered by the following extension: aks-preview
(InvalidParameter) All or none of the agentpools should set podsubnet
Code: InvalidParameter
Message: All or none of the agentpools should set podsubnet
  • If you are deleting a node pool in any of the scenarios explained above, ensure that at least one node pool remains that was created with --mode System; otherwise, you will observe this error:
az aks nodepool delete \
    --resource-group azpcvnet \
    --cluster-name azpcvnet \
    --name nodepool1
The behavior of this command has been altered by the following extension: aks-preview
(OperationNotAllowed) There has to be at least one system agent pool.
Code: OperationNotAllowed
Message: There has to be at least one system agent pool.

Conclusion

Hopefully, this post gave you a good overview of installing or migrating your existing or new AKS clusters running Azure CNI powered by Cilium with Azure Linux and upgrading to Isovalent Enterprise for Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
