“Your mind is your software; always update and upgrade it to experience its functionality in a better version” – Mike Ssendikwanawa. As soon as I read that quote, the one thing that came to mind was how a single pane of glass could reduce the complexity of networking plugins. A unified approach decreases the potential for human error, whether changes are made manually or through automation at scale. Azure CNI powered by Cilium is that single pane of glass to which you can upgrade all your Azure network plugins.
As a part of this tutorial, you will learn how to upgrade existing clusters in Azure Kubernetes Service (AKS) using different network plugins (supported by Azure) to Azure CNI powered by Cilium.
What is Azure CNI powered by Cilium?
By making use of eBPF programs loaded into the Linux kernel, Azure CNI Powered by Cilium provides the following benefits:
- Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
- Faster service routing
- More efficient network policy enforcement
- Better observability of cluster traffic
- Support for clusters with more nodes, pods, and services
What are the Azure networking models?
In AKS, you can deploy a cluster that uses one of the following network models:
Kubenet networking
You can create AKS clusters using kubenet and create a virtual network and subnet. With kubenet, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is configured on the nodes, and pods receive an IP address hidden behind the node IP. This approach reduces the number of IP addresses you must reserve in your network space for pods.
Azure CNI networking
With Azure Container Networking Interface (CNI), every pod gets an IP address from the subnet and can be accessed directly. Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod. These IP addresses must be unique across your network space and planned in advance. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved upfront for that node. This approach requires more planning and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow. Azure CNI networking is further divided into offerings you can choose based on your requirements.
Azure CNI (Advanced) networking for Dynamic Allocation of IPs and enhanced subnet support
In Azure CNI (Legacy mode), every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node is then reserved upfront. This approach can lead to IP address exhaustion. To avoid these planning challenges, it is recommended that Azure CNI networking be enabled for the dynamic allocation of IPs and enhanced subnet support.
Azure CNI overlay networking
Azure CNI Overlay represents an evolution of Azure CNI, addressing scalability and planning challenges arising from assigning VNet IPs to pods. It achieves this by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. The Azure VNet stack routes packets without encapsulation. Unlike Kubenet, where the traffic dataplane is handled by user-defined routes configured in the subnet’s route table, Azure CNI Overlay delegates this responsibility to Azure networking.
Azure CNI Powered by Cilium
In Azure CNI Powered by Cilium, Cilium leverages eBPF programs in the Linux kernel to accelerate packet processing for faster performance. Azure CNI powered by Cilium is a convenient “out-of-the-box” approach that gets you started with Cilium in an AKS environment. For advanced Cilium features, you can opt for Isovalent Enterprise for Cilium. Azure CNI powered by Cilium AKS clusters can be created (as explained above) in the following ways:
- Overlay Mode
- Dynamic IP allocation mode
Isovalent Enterprise for Cilium
Isovalent Enterprise for Cilium provides a one-click seamless upgrade from Azure CNI powered by Cilium. You can leverage the rich feature set (Network Policy, Encryption, Hubble-UI, Clustermesh, etc.) and get access to Microsoft and Isovalent support.
Bring your own CNI (BYOCNI)
AKS allows you to install any third-party CNI plugin like Cilium. You can install Cilium using the Bring Your Own CNI (BYOCNI) feature and leverage the rich Enterprise feature set from Isovalent.
The Upgrade Matrix
Each plugin provides key features (as discussed above), but Azure CNI powered by Cilium stands out: Cilium users benefit from the Azure CNI control plane, with improved IP Address Management (IPAM) and tighter integration into Azure Kubernetes Service (AKS), while AKS users benefit from the feature set of Cilium. Providing an upgrade path is key to helping you in this progressive journey with Cilium.
Note–
- If you upgrade to Azure CNI powered by Cilium, your AKS clusters will no longer have Kube-proxy-based iptables implementation. Your clusters will be automatically migrated as a part of the upgrade process.
- Scenarios 1-3 below assume that Cilium was not installed on these clusters before the upgrade.
- Scenario 4 talks about upgrading from Legacy Azure IPAM with Cilium OSS to Azure CNI powered by Cilium. If you have more questions about it, contact sales@isovalent.com
- Scenario 5 talks about upgrading from Kubenet to Azure CNI powered by Cilium.
- Scenario 6 discusses upgrading from Kubenet to Azure CNI powered by Cilium (disabling Network Policy on an existing AKS cluster with Kubenet).
- Scenario 7 below briefly touches upon the Upgrade from Azure CNI powered by Cilium to Isovalent Enterprise for Cilium. Reading Isovalent in Azure Kubernetes Service is recommended to get full insights into Isovalent Enterprise for Cilium. Reach out to support@isovalent.com for any support-related queries.
- Upgrading from BYOCNI OSS to BYOCNI Isovalent Enterprise for Cilium (CEE) is available by contacting sales@isovalent.com
- Upgrading from Azure CNI OSS to Azure CNI Isovalent Enterprise for Cilium (CEE) is available by contacting sales@isovalent.com.
Pre-Requisites
The following prerequisites need to be taken into account before you proceed with this tutorial:
- An Azure account with an active subscription. Create an account for free.
- Azure CLI version 2.48.1 or later. Run `az --version` to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
- If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
- The kubectl command line tool is installed on your device. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.
- Subscription to Azure Monitor (Optional).
- Install Cilium CLI.
- Install Helm.
Scenario 1: Upgrade an AKS cluster on Azure CNI Overlay to Azure CNI powered by Cilium
Create an Azure CNI Overlay cluster and upgrade it to Azure CNI powered by Cilium. If you have an existing cluster, this step is optional; you can go directly to the upgrade section.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace `SubscriptionName` with your subscription name.
- You can also use your subscription ID instead of your subscription name.
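A minimal sketch of the command (the subscription name is a placeholder):

```bash
# Set the active subscription by name (or pass the subscription ID instead)
az account set --subscription "SubscriptionName"
```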
AKS Resource Group Creation
Create a Resource Group
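For example (the group name and region are illustrative placeholders, not the original values):

```bash
# Create a resource group to hold the cluster resources
az group create --name overlay-rg --location eastus
```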
AKS Cluster creation
Create a cluster with Azure CNI Overlay and use the argument `--network-plugin-mode` to specify an overlay cluster. If the pod CIDR isn’t specified, AKS assigns a default space: 10.244.0.0/16.
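A sketch of the create command, assuming the illustrative names from above:

```bash
# Create an AKS cluster with Azure CNI in Overlay mode;
# the pod CIDR shown is an example value
az aks create --name overlay-cluster --resource-group overlay-rg \
  --location eastus \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```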
Output Truncated:
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Create a sample application.
- Use the sample manifest below for an application to see how the pod and node addresses are distinct.
- Apply the manifest
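A minimal sketch of such a manifest (an illustrative nginx Deployment; the name and image are assumptions, not the original sample), applied inline:

```bash
# Apply a small sample application, then compare pod IPs against node IPs
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-sample
  template:
    metadata:
      labels:
        app: nginx-sample
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF

kubectl get pods -o wide
kubectl get nodes -o wide
```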
Upgrade the cluster to Azure CNI Powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure Network Policy Manager (NPM) or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
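A sketch of the upgrade command, assuming the names used above; on recent Azure CLI versions the flag is `--network-dataplane` (earlier previews used `--enable-cilium-dataplane`):

```bash
# Switch the cluster's dataplane to Cilium
az aks update --name overlay-cluster --resource-group overlay-rg \
  --network-dataplane cilium
```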
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
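For example, you can confirm this with:

```bash
# kube-proxy runs as a daemonset in kube-system before the upgrade
kubectl get daemonset kube-proxy -n kube-system
```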
Azure-CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Overlay than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
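One way to observe this while the upgrade is in progress (a simple filter on the kube-system namespace):

```bash
# Watch for the azure-cns-transition pods during the upgrade
kubectl get pods -n kube-system | grep azure-cns
```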
No Kube-Proxy After Upgrade
After the upgrade, the `kube-proxy` daemonset is no longer there, and Cilium completely takes over.
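A quick check (the `cilium` daemonset should be present and `kube-proxy` absent):

```bash
# List daemonsets; kube-proxy should be gone and cilium should be running
kubectl get daemonset -n kube-system
```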
Scenario 2: Upgrade an AKS cluster on Azure CNI for Dynamic Allocation of IPs and enhanced subnet support to Azure CNI powered by Cilium
Create an Azure CNI cluster for dynamic allocation of IPs and enhanced subnet support, and upgrade it to Azure CNI powered by Cilium. If you have an existing cluster, this step is optional; you can go directly to the upgrade section.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.
AKS Resource Group creation
Create a Resource Group
AKS Network creation
Create a virtual network with subnets for nodes and pods, and retrieve the subnet IDs.
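A sketch of the network setup; the names and address ranges are illustrative placeholders:

```bash
# Create a virtual network with a node subnet
az network vnet create --resource-group dynamic-rg --name dynamic-vnet \
  --address-prefixes 10.0.0.0/8 \
  --subnet-name nodesubnet --subnet-prefixes 10.240.0.0/16

# Add a dedicated pod subnet
az network vnet subnet create --resource-group dynamic-rg \
  --vnet-name dynamic-vnet --name podsubnet \
  --address-prefixes 10.241.0.0/16

# Retrieve the subnet IDs for cluster creation
NODE_SUBNET_ID=$(az network vnet subnet show --resource-group dynamic-rg \
  --vnet-name dynamic-vnet --name nodesubnet --query id -o tsv)
POD_SUBNET_ID=$(az network vnet subnet show --resource-group dynamic-rg \
  --vnet-name dynamic-vnet --name podsubnet --query id -o tsv)
```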
AKS Cluster creation
Create an AKS cluster referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`. Make sure to use the argument `--network-plugin` as `azure`.
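A sketch of the create command, re-using the subnet IDs captured above:

```bash
# Create the cluster with dynamic IP allocation from the pod subnet
az aks create --name dynamic-cluster --resource-group dynamic-rg \
  --network-plugin azure \
  --vnet-subnet-id "$NODE_SUBNET_ID" \
  --pod-subnet-id "$POD_SUBNET_ID"
```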
Output Truncated:
Note–
IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster; nodes request 16 IPs at startup and another batch of 16 whenever fewer than 8 IPs remain unallocated in their allotment.
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Create a sample application.
- Use the sample manifest below for an application to see how the pod and node addresses are distinct.
- Apply the manifest
Upgrade the cluster to Azure CNI Powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure NPM or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
Azure CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Azure CNI with dynamic IP allocation than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
No Kube-Proxy After Upgrade
After the upgrade, the `kube-proxy` daemonset is no longer there, and Cilium completely takes over.
Scenario 3: Upgrade an AKS cluster on Azure CNI to Azure CNI powered by Cilium
This is a three-step upgrade: an existing cluster on Azure CNI is first upgraded to Azure CNI Overlay and then to Azure CNI powered by Cilium.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.
Step 1- Create a cluster on Azure CNI
Create an Azure CNI cluster and upgrade it to Azure CNI Overlay. If you have an existing cluster, this is optional; you can directly go to the upgrade section.
AKS Resource Group creation
Create a Resource Group
AKS Network creation
Create a virtual network with a subnet for nodes and retrieve the subnet ID.
AKS Cluster creation
Create an AKS cluster, and make sure to use the argument `--network-plugin` as `azure`.
Output Truncated:
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Step 2- Upgrade the cluster to Azure CNI Overlay
You can update an existing Azure CNI cluster to Overlay if the cluster meets the following criteria:
- The cluster is on Kubernetes version 1.22+.
- Doesn’t use the dynamic pod IP allocation feature.
- Doesn’t have network policies enabled.
- Doesn’t use any Windows node pools with docker as the container runtime.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
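A sketch of the Overlay conversion, with illustrative cluster and group names (`azurecni-cluster`, `azurecni-rg`) and an example pod CIDR:

```bash
# Convert the legacy Azure CNI cluster to Overlay mode
az aks update --name azurecni-cluster --resource-group azurecni-rg \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```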
Output Truncated:
Step 3- Upgrade the cluster to Azure CNI Powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure NPM or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
Azure-CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Overlay than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
No Kube-Proxy After Upgrade
After the upgrade, the kube-proxy daemonset is no longer there, and Cilium completely takes over.
Scenario 4: Upgrade an AKS cluster on Legacy Azure IPAM with Cilium OSS to Azure CNI powered by Cilium
This is a three-step upgrade: an existing cluster on Legacy Azure IPAM with Cilium OSS is first upgraded to Azure CNI Overlay and then to Azure CNI powered by Cilium.
Note- There could be potential loss of services and deployments, so it is recommended that you contact support@isovalent.com before you proceed with this upgrade. They can advise you on the migration path.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.
Step 1- Create a cluster on Legacy Azure IPAM
Create a cluster on Legacy Azure IPAM and upgrade it to Azure CNI Overlay. If you have an existing cluster, this step is optional; you can go directly to the upgrade section.
AKS Resource Group creation
Create a Resource Group
AKS Cluster creation
Create an AKS cluster, and make sure to use the argument `--network-plugin` as `azure`.
Output Truncated:
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Create a Service Principal:
To allow the cilium-operator to interact with the Azure API, a Service Principal with Contributor privileges over the AKS cluster is required (see Azure IPAM required privileges for more details). It is recommended to create a dedicated Service Principal for each Cilium installation with minimal privileges over the AKS node resource group:
Note- The `AZURE_NODE_RESOURCE_GROUP` is the MC_* node resource group that contains all of the infrastructure resources associated with the cluster.
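A sketch of creating such a Service Principal, scoped to the node resource group (cluster and group names are placeholders):

```bash
# Look up the MC_* node resource group and the subscription ID
AZURE_NODE_RESOURCE_GROUP=$(az aks show --name legacy-cluster \
  --resource-group legacy-rg --query nodeResourceGroup -o tsv)
AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)

# Create a dedicated Service Principal; capture appId, password, and tenant
# from the JSON output for the Helm install below
az ad sp create-for-rbac --name cilium-operator --role Contributor \
  --scopes "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourceGroups/${AZURE_NODE_RESOURCE_GROUP}"
```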
Setup Helm repository:
Add the Cilium repo
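```bash
# Add the Cilium Helm repository and refresh the local index
helm repo add cilium https://helm.cilium.io/
helm repo update
```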
Install Cilium
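A sketch of the Helm install with legacy Azure IPAM, assuming the Service Principal credentials from the previous step; the exact values depend on your Cilium version, so cross-check the Cilium Azure installation docs:

```bash
# Credentials from the 'az ad sp create-for-rbac' output (placeholders)
AZURE_TENANT_ID="<tenant>"
AZURE_CLIENT_ID="<appId>"
AZURE_CLIENT_SECRET="<password>"

# Install Cilium with Azure IPAM; flags shown follow the upstream docs
helm install cilium cilium/cilium --namespace kube-system \
  --set azure.enabled=true \
  --set azure.resourceGroup="${AZURE_NODE_RESOURCE_GROUP}" \
  --set azure.subscriptionID="${AZURE_SUBSCRIPTION_ID}" \
  --set azure.tenantID="${AZURE_TENANT_ID}" \
  --set azure.clientID="${AZURE_CLIENT_ID}" \
  --set azure.clientSecret="${AZURE_CLIENT_SECRET}" \
  --set tunnel=disabled \
  --set ipam.mode=azure \
  --set enableIPv4Masquerade=false \
  --set nodeinit.enabled=true
```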
Step 2- Upgrade the cluster to Azure CNI Overlay
You can update an existing Azure CNI cluster to Overlay if the cluster meets the following criteria:
- The cluster is on Kubernetes version 1.22+.
- Doesn’t use the dynamic pod IP allocation feature.
- Doesn’t have network policies enabled.
- Doesn’t use any Windows node pools with docker as the container runtime.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Step 3- Upgrade the cluster to Azure CNI powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure NPM or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
Azure-CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Overlay than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
No Kube-Proxy After Upgrade
After the upgrade, the kube-proxy daemonset is no longer there, and Cilium completely takes over.
Scenario 5: Kubenet to Azure CNI powered by Cilium
This is a three-step upgrade: an existing cluster on Kubenet is first upgraded to Azure CNI Overlay and then to Azure CNI powered by Cilium.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.
Step 1- Create a cluster on Kubenet
Create a cluster on Kubenet and upgrade it to Azure CNI Overlay. If you have an existing cluster, this step is optional; you can go directly to the upgrade section.
AKS Resource Group creation
Create a Resource Group
Create a Managed Identity
Create a Managed Identity for the AKS cluster. Take note of the `principalId`, as it will be required when assigning a role to the managed identity.
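For example (identity and group names are placeholders):

```bash
# Create a user-assigned managed identity and note its principalId
az identity create --name kubenet-identity --resource-group kubenet-rg
az identity show --name kubenet-identity --resource-group kubenet-rg \
  --query principalId -o tsv
```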
AKS Network creation
Create a virtual network with a subnet for nodes and retrieve the subnet ID.
Role assignment for User-Managed Identity
Assign the `Network Contributor` role to the user-managed identity using the `principalId` from the previous step. Also, fetch the subnet ID.
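A sketch of the role assignment, assuming the placeholder names used above:

```bash
# Grant the identity Network Contributor on the virtual network
PRINCIPAL_ID=$(az identity show --name kubenet-identity \
  --resource-group kubenet-rg --query principalId -o tsv)
VNET_ID=$(az network vnet show --name kubenet-vnet \
  --resource-group kubenet-rg --query id -o tsv)

az role assignment create --assignee "$PRINCIPAL_ID" \
  --role "Network Contributor" --scope "$VNET_ID"

# Fetch the node subnet ID for cluster creation
SUBNET_ID=$(az network vnet subnet show --vnet-name kubenet-vnet \
  --resource-group kubenet-rg --name nodesubnet --query id -o tsv)
```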
AKS Cluster creation
Create an AKS cluster with managed identities.
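A sketch of the create command, re-using the subnet and identity captured above:

```bash
# Create a kubenet cluster bound to the user-assigned identity
IDENTITY_ID=$(az identity show --name kubenet-identity \
  --resource-group kubenet-rg --query id -o tsv)

az aks create --name kubenet-cluster --resource-group kubenet-rg \
  --network-plugin kubenet \
  --vnet-subnet-id "$SUBNET_ID" \
  --enable-managed-identity \
  --assign-identity "$IDENTITY_ID"
```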
Output Truncated:
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Step 2- Upgrade the cluster to Azure CNI Overlay
You can update an existing Azure CNI cluster to Overlay if the cluster meets the following criteria:
- The cluster is on Kubernetes version 1.22+.
- Doesn’t use the dynamic pod IP allocation feature.
- Doesn’t have network policies enabled.
- Doesn’t use any Windows node pools with docker as the container runtime.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Step 3- Upgrade the cluster to Azure CNI powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure NPM or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
Azure-CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Overlay than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
No Kube-Proxy After Upgrade
After the upgrade, the kube-proxy daemonset is no longer there, and Cilium completely takes over.
Scenario 6: Kubenet to Azure CNI powered by Cilium (disabling Network Policy)
This is a three-step upgrade: an existing cluster on Kubenet with network policy (Calico) is first upgraded to Azure CNI Overlay and then to Azure CNI powered by Cilium.
Note–
- The uninstall process does not remove Calico’s Custom Resource Definitions (CRDs) and Custom Resources (CRs). These CRDs and CRs all have names ending with either “projectcalico.org” or “tigera.io.” These CRDs and associated CRs can be manually deleted after Calico is successfully uninstalled (deleting the CRDs before removing Calico breaks the cluster).
- The upgrade will not remove any NetworkPolicy resources in the cluster, but once Calico is uninstalled, these policies are no longer enforced.
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
- Replace SubscriptionName with your subscription name.
- You can also use your subscription ID instead of your subscription name.
Step 1- Create a cluster on Kubenet
Create a cluster on Kubenet and upgrade it to Azure CNI Overlay. If you have an existing cluster, this is optional; you can directly go to the upgrade section.
AKS Resource Group creation
Create a Resource Group
Create a Managed Identity
Create a Managed Identity for the AKS cluster. Take note of the `principalId`, which will be required when assigning a role to the managed identity.
AKS Network creation
Create a virtual network with a subnet for nodes and retrieve the subnet ID.
Role assignment for User-Managed Identity
Assign the `Network Contributor` role to the user-managed identity using the `principalId` from the previous step. Also, fetch the subnet ID.
AKS Cluster creation
Create an AKS cluster with managed identities and the network policy set to `calico`.
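A sketch of the create command; `$SUBNET_ID` and `$IDENTITY_ID` follow the same pattern as in Scenario 5, and the names are placeholders:

```bash
# As in Scenario 5, but with the Calico network policy engine enabled
az aks create --name kubenet-calico-cluster --resource-group kubenet-rg \
  --network-plugin kubenet \
  --network-policy calico \
  --vnet-subnet-id "$SUBNET_ID" \
  --enable-managed-identity \
  --assign-identity "$IDENTITY_ID"
```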
Output Truncated:
Set the Kubernetes Context
Log in to the Azure portal, browse to Kubernetes Services, select the Kubernetes service you created (the AKS cluster), and click Connect. This will connect you to your AKS cluster and set the respective Kubernetes context.
Step 2- Upgrade the cluster to Azure CNI Overlay
You can update an existing Azure CNI cluster to Overlay if the cluster meets the following criteria:
- The cluster is on Kubernetes version 1.22+.
- Doesn’t use the dynamic pod IP allocation feature.
- Doesn’t have network policies enabled.
- Doesn’t use any Windows node pools with docker as the container runtime.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Note- Make sure that the network policy is set to `none`.
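A hedged sketch of the combined update, assuming the names above; depending on your Azure CLI version, removing Calico may need to be a separate `az aks update --network-policy none` step:

```bash
# Convert the kubenet cluster to Overlay while disabling the Calico
# network policy engine (verify the flag combination with your CLI version)
az aks update --name kubenet-calico-cluster --resource-group kubenet-rg \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --network-policy none
```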
Output Truncated:
Note-
- The uninstall process does not remove Custom Resource Definitions (CRDs) and Custom Resources (CRs) used by Calico.
- These CRDs and CRs all have names ending with either “projectcalico.org” or “tigera.io.” These CRDs and associated CRs can be manually deleted after Calico is successfully uninstalled (deleting the CRDs before removing Calico breaks the cluster).
- The upgrade will not remove any NetworkPolicy resources in the cluster, but once Calico is uninstalled, these policies are no longer enforced.
Step 3- Upgrade the cluster to Azure CNI powered by Cilium
You can update an existing cluster to Azure CNI Powered by Cilium if the cluster meets the following criteria:
- The cluster uses Azure CNI Overlay or Azure CNI with dynamic IP allocation (legacy Azure CNI without dynamic IP allocation is not eligible for a direct upgrade).
- The cluster does not have Azure NPM or Calico enabled.
- The cluster does not have any Windows node pools.
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn’t supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.
Output Truncated:
Kube-Proxy Before Upgrade
Notice the `kube-proxy` daemonset is running.
Azure-CNS During Upgrade
Notice a pod that gets created with the prefix `azure-cns-transition`. The job of CNS is to manage IP allocation for pods per node and serve requests from Azure IPAM. Azure CNS works differently for Overlay than it does for Azure CNI powered by Cilium, so a transition pod is spun up to take care of the migration during the upgrade.
No Kube-Proxy After Upgrade
After the upgrade, the kube-proxy daemonset is no longer there, and Cilium completely takes over.
Scenario 7: Upgrade to Isovalent Enterprise for Cilium
You can upgrade your AKS cluster in all of the above scenarios to Isovalent Enterprise for Cilium. For brevity, this tutorial describes one such upgrade.
You can follow this blog and the steps to upgrade an existing AKS cluster to Isovalent Enterprise for Cilium.
- In the Azure portal, search for Marketplace on the top search bar. In the results, under Services, select Marketplace.
- Type ‘Isovalent’ In the search window and select the offer.
- On the Plans + Pricing tab, select an option. Ensure that the terms are acceptable, and then select Create.
- Select the resource group containing the cluster that will be upgraded.
- Under Create New Dev Cluster, select ‘No,’ and click Next: Cluster Details.
- Because ‘No’ was selected, this will upgrade an existing cluster in that region.
- Select the name of the AKS cluster from the drop-down; it is auto-populated.
- Click ‘Next: Review + Create’.
- Once final validation is complete, click ‘Create’.
- When the application is deployed, the portal will show ‘Your deployment is complete’, along with deployment details.
Failure Scenario
Upgrade from Azure CNI to Azure CNI powered by Cilium
As you learned in Scenario 3, an AKS cluster on Azure CNI can be upgraded to Azure CNI powered by Cilium via a three-step procedure. If you attempt to upgrade directly, you will see error messages:
In the example below, a user attempts to upgrade an AKS cluster (clusterName=`azurecni`) on Azure CNI directly to Azure CNI powered by Cilium and is rightly prompted with an error that the operation cannot proceed.
Validation- Isovalent Enterprise for Cilium
Cluster status check
Check the status of the nodes and make sure they are in a “Ready” state:
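```bash
# All nodes should report a Ready status
kubectl get nodes -o wide
```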
Validate Cilium version
Check the version of Cilium with `cilium version`:
Cilium Health Check
`cilium-health` is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. You can check node-to-node health with `cilium-health status`:
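For example, running the check from inside a Cilium agent pod:

```bash
# Execute cilium-health against one of the cilium daemonset pods
kubectl -n kube-system exec ds/cilium -- cilium-health status
```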
Cilium Connectivity Test
The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicy resources that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations.
The Cilium connectivity test was run for all of the above scenarios, and all tests passed. A truncated output from one such test run is shown below:
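```bash
# Run the Cilium CLI connectivity test against the current context
cilium connectivity test
```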
Output Truncated:
Azure Monitor (Optional)
When critical applications and business processes rely on Azure resources, you want to monitor those resources for their availability, performance, and operation. You can monitor data generated by AKS and analyze it with Azure Monitor.
- Log in to the Azure portal
- Select the respective resource group where the AKS cluster has been created.
- Select Monitoring > Insights > Configure Monitoring
- Select Enable Container Logs
Azure Monitor in action
- Creation of an `azure-cns-transition` pod
- Deletion of a node during the upgrade process
- Removal of a surge node
- Reimage of a node during the upgrade process
Events
When you upgrade your cluster, the following Kubernetes events may occur on each node:
- Surge: Creates a surge node.
- Drain: Evicts pods from the node. Each pod has a 30-second timeout to complete the eviction.
- Update: An update of a node succeeds or fails.
- Delete: Deletes a surge node.
Use `kubectl get events` to show events in the default namespace while running an upgrade.
Activity Logs
The Azure Monitor activity log is a platform log in Azure that provides insight into subscription-level events, such as when a resource is modified or a virtual machine is started.
- Upgrade alert (Kubernetes Services > Activity Log) that indicates a cluster upgrade from Azure CNI to Azure CNI powered by Cilium.
- You can also look at the change analysis (Kubernetes Services > Activity Log > Changed Properties) to dive deeper into the change set.
Conclusion
Hopefully, this tutorial gave you a good overview of how to upgrade your existing AKS clusters in Azure to Azure CNI powered by Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
Try it Out
- Try out Azure CNI powered by Cilium.
- Try out Isovalent Enterprise for Cilium on the Azure marketplace.
Further Reading
- Tutorial on installing an AKS cluster running Isovalent Enterprise for Cilium from Azure Marketplace
- Tutorial on installing an AKS cluster running Azure CNI powered by Cilium
- Cilium on AKS in Bring Your Own CNI mode.
- Azure and Isovalent main partner page
Amit Gupta is a senior technical marketing engineer at Isovalent, powering eBPF cloud-native networking and security. Amit has 21+ years of experience in Networking, Telecommunications, Cloud, Security, and Open-Source. He has previously worked with Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area outside of work. Amit is an avid runner and cyclist and also spends considerable time helping kids in orphanages.