
“At any given moment, you have the power to say: this is not how the story is going to end.” —Christine Mason Miller
With that, we resume where we left off in Part 1 of this blog, Deploying Isovalent Enterprise for Cilium from Azure Marketplace. In this tutorial, you will learn how to enable the advanced features provided by Isovalent using Azure IaC (Infrastructure as Code) tooling, namely ARM (Azure Resource Manager) templates and the Azure CLI, starting from the Azure Marketplace.
With Azure CNI Powered by Cilium, AKS is now natively powered by Cilium. Azure CNI Powered by Cilium combines the robust control plane of Azure CNI with the dataplane of Cilium to provide high-performance networking and security.
AKS customers will also benefit from a seamless one-click upgrade experience from Azure CNI Powered by Cilium to Isovalent Enterprise for Cilium platform. The enterprise platform is available in the Azure Container Marketplace and makes the complete Cilium feature set available to Azure customers. This includes security and governance controls, extended network capabilities, the complete set of Isovalent Enterprise features, and more!
The tight integration into the Azure platform simplifies operations by enabling auto-upgrades and natively integrating into the Azure ecosystem for SIEM export, monitoring, and governance control. The unified billing experience will eliminate management overhead. Finally, the support collaboration will maximize the reliability and customer experience of the platform.
What is Isovalent Enterprise for Cilium
Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.
Why Isovalent Enterprise for Cilium
For enterprise customers requiring support and/or usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended.
This offering brings complete flexibility in terms of access to Cilium features while retaining seamless ease of use and integration with Azure.
Prerequisites
- AKS Cluster is up and running with Azure CNI powered by Cilium
- Azure CLI version 2.41.0 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
- Templates are JavaScript Object Notation (JSON) files. To create templates, you need a good JSON editor. We recommend Visual Studio Code with the Azure Resource Manager Tools extension. If you need to install these tools, see Quickstart: Create ARM templates with Visual Studio Code.
- The kubectl command line tool is installed on your device. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.
Register resource providers
Before you deploy a container offer, you must register the Microsoft.ContainerService and Microsoft.KubernetesConfiguration providers on your subscription by using the az provider register command:
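For example (the --wait flag simply blocks until registration completes):

```bash
# Register the resource providers required for container offers (safe to re-run)
az provider register --namespace Microsoft.ContainerService --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait
```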
Deploying a Kubernetes Application
There are three ways to deploy a Kubernetes application on an AKS cluster running Isovalent Enterprise for Cilium:
- Azure Marketplace
- Azure ARM Template
- Azure CLI
Azure ARM Template
With the move to the cloud, many teams have adopted agile development methods. These teams iterate quickly. They need to repeatedly deploy their solutions to the cloud, and know their infrastructure is in a reliable state.
To implement infrastructure as code for your Azure solutions, you can use Azure Resource Manager (ARM) templates. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
To deploy a Kubernetes application programmatically through ARM templates, a user needs to select the Kubernetes application and settings, generate an ARM template, accept legal terms and conditions, and finally deploy the ARM template.
Select Kubernetes Application
First, you need to select the Kubernetes application that you want to deploy/upgrade in the Azure portal.
- In the Azure portal, search for Marketplace on the top search bar. In the results, under Services, select Marketplace.
- You can search for an application or publisher directly by name, or you can browse all applications. To find Kubernetes applications, on the left side, under Categories, select Containers.

Generate ARM Template
- In the search window type “Isovalent” and select the application.

- On the Plans + Pricing tab, select an option. Ensure that the terms are acceptable, and then select Create.
- Select the respective subscription in which the existing AKS cluster has already been created with Azure CNI powered by Cilium.
- Select the resource group to deploy the cluster in.
- For "Create New Dev Cluster", select "No" and click on Next: Cluster Details.

Note- In case the user would like to create a new AKS cluster, the user needs to click on Resource Group and select the option "Create New". In that case, the user also needs to select the option "Yes" at the "Create new dev cluster" step so that the new cluster is created with Isovalent Enterprise for Cilium.
- The AKS cluster name will be available through the drop-down; select it and click on "Next: Review + Create".

- Once the final validation is complete, click "View Automation Template" and then click "Download".


- If all the validations have passed, you’ll see the ARM template in the editor.

- Download the ARM template and save it to a location on your computer.
- The user needs to note the following values, which will be used later in this tutorial: plan-publisher, plan-offerID, and plan-name.
- The user needs to exit the Azure portal screen at this point and not click on Create in the UI workflow, as the ARM template will now take over installing the "Cilium" extension and enabling the Enterprise features.
Editing the JSON file
- The user can unzip the downloaded ARM template archive, which includes a parameters.json file, and edit the following fields (a sample edited file is sketched after this list):
- clusterResourceName – the name of the AKS cluster that is being created, or of an existing AKS cluster running on Azure CNI powered by Cilium.
- createNewCluster – defaults to false, which means an existing AKS cluster running on Azure CNI powered by Cilium will be upgraded to Isovalent Enterprise for Cilium. The user can set it to true when creating a new cluster on Isovalent Enterprise for Cilium.
- extensionResourceName – the name of the Cilium extension; this should not be edited.
- location – the Azure region where the AKS resource has been created or is being upgraded.
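For illustration only, an edited parameters file for upgrading an existing cluster might look roughly like the sketch below; the cluster name and region are placeholders, and in practice you edit the parameters.json extracted from the downloaded archive rather than recreating it:

```bash
# Illustrative sketch of an edited parameters.json (values are placeholders)
cat > parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterResourceName": { "value": "my-existing-aks-cluster" },
    "createNewCluster": { "value": false },
    "extensionResourceName": { "value": "cilium" },
    "location": { "value": "eastus" }
  }
}
EOF
```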
Accept terms and agreements
Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, use Azure CLI or Azure PowerShell. For this tutorial, we will be using Azure CLI to deploy the ARM template.
Be sure to use the values you copied prior to downloading the ARM template for plan-publisher, plan-offerID, and plan-name in your command.
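A hedged sketch of the acceptance step with the Azure CLI, substituting the values noted earlier:

```bash
# Accept the marketplace terms for the selected plan (use the values copied earlier)
az vm image terms accept \
  --publisher <plan-publisher> \
  --offer <plan-offerID> \
  --plan <plan-name>
```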
Values.yaml file
- To enable the Enterprise features listed above that will be switched on as part of this tutorial, the user can create a sample values file as shown in the example below:
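A minimal sketch of such a file, assuming the goal is to enable Hubble (with Relay), WireGuard encryption, and kube-proxy replacement as demonstrated later in this tutorial; the key names follow the open-source Cilium Helm chart and should be confirmed against your Isovalent Enterprise for Cilium release:

```bash
# Sketch of Helm-style values for the enterprise features used in this tutorial
cat > values.yaml <<'EOF'
hubble:
  enabled: true
  relay:
    enabled: true
encryption:
  enabled: true
  type: wireguard
kubeProxyReplacement: strict
l7Proxy: false   # required while WireGuard encryption is enabled (see the encryption notes)
EOF
```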
Setting Enterprise Values for the ARM template
- A user can download yq and use it to inject the custom Helm values into the ARM template, since the template's configuration settings follow a key:value pair notation. Users can use a tool of their choice as well to inject the Helm values.
- The output of this will be a JSON file, which will be used to deploy the ARM template. A sample JSON file, which the user has to copy, is shown below.
Note– The output below is truncated.
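As an illustration of the injection step itself (not of the elided template output), a recent mikefarah yq v4 can flatten the Helm values into the dotted key = value pairs that the extension's configuration settings expect; where exactly the pairs are merged into the template depends on the file you downloaded:

```bash
# Flatten values.yaml into dotted pairs such as hubble.relay.enabled = true,
# ready to be copied into the template's configurationSettings section
yq -o=props values.yaml
```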
Note-
- When Hubble is installed, it creates pods in the kube-system namespace by default. If you have a SIEM such as Microsoft Defender, you may see an error similar to "New container 'hubble-relay' in the kube-system namespace detected".
- This is the intended installation process for the Cilium and Hubble components, and with the default alerting rules these errors are expected; running the Cilium and Hubble components within the kube-system namespace of the cluster follows best practices.
- Adding an exception for such events on Microsoft Defender’s alerting rules can help in mitigating the errors. For such modifications, you can get in touch with Microsoft Support. This is true for any alternative External Attack Surface Management tools.
Deploy ARM Template
- To start working with ARM templates, sign in with your Azure credentials in Azure CLI.
- If you have multiple Azure subscriptions, choose the subscription you want to use.
- Replace SubscriptionName with your subscription name. You can also use your subscription ID instead of your subscription name.
- To deploy the template, use either Azure CLI or Azure PowerShell. Use the resource group you created when creating the cluster on Azure CNI powered by Cilium (see the command sketch after this list).
- The template file type is "json".
- The deployment command returns results. Look for ProvisioningState to see whether the deployment succeeded.
Note– The output below is truncated.
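For reference, a hedged sketch of the full sequence described above; the subscription, resource group, and file names are placeholders:

```bash
# Sign in and select the subscription that holds the AKS cluster
az login
az account set --subscription "<SubscriptionName-or-ID>"

# Deploy the downloaded template together with the edited parameters file
az deployment group create \
  --resource-group <resource-group-of-the-aks-cluster> \
  --template-file template.json \
  --parameters parameters.json
```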
Verify Deployment
- You can verify the deployment by exploring the resource group from the Azure portal.
- Sign in to the Azure portal.
- From the left menu, select Resource Groups.
- Check the box to the left of the resource group in which the AKS cluster resides.
- Select the resource group you used in the earlier procedure.
- You can also check the extensions installed on the cluster from Azure Portal. On the AKS cluster, go to the “Extensions + applications” menu to verify the settings.

- Optional – Log in to the Azure portal, browse to Kubernetes services, select the respective Kubernetes service that was created (the AKS cluster), and click on Connect. This will help end users connect to their AKS cluster and set the respective Kubernetes context.
- Verify the deployment by using the following command to list the extension and features that were enabled on your cluster after the successful deployment of the ARM template:
- Validate the version of Isovalent Enterprise for Cilium on your newly created cluster.
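A hedged sketch of both checks; the cluster and resource group names are placeholders, and the extension name is assumed to be cilium:

```bash
# List the Cilium extension and the configuration settings applied to the cluster
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster-name> \
  --cluster-type managedClusters \
  --name cilium

# Validate the Cilium version running on the cluster
kubectl -n kube-system exec ds/cilium -- cilium version
```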
Azure CLI
You can create extension instances in an AKS cluster, setting required and optional parameters, including options related to updates and configuration. You can also view, list, update, and delete extension instances.
Prerequisites
- An Azure subscription.
- When creating the extension, users need to make sure that they have an AKS cluster up and running with Azure CNI powered by Cilium.
- When updating the extension, users need to make sure that they have an AKS cluster up and running with Isovalent Enterprise for Cilium from the Azure Marketplace.
- The Microsoft.ContainerService and Microsoft.KubernetesConfiguration resource providers must be registered on your subscription. To register these providers, run the command shown after this list.
- The latest version of the k8s-extension Azure CLI extension, installed by running the command shown after this list.
- If the extension is already installed, make sure you’re running the latest version by using the following command:
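A sketch of those commands:

```bash
# Register the required resource providers (safe to re-run)
az provider register --namespace Microsoft.ContainerService --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait

# Install the k8s-extension CLI extension, or update it if it is already installed
az extension add --name k8s-extension
az extension update --name k8s-extension
```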
Select the Kubernetes application
Users need to select the Kubernetes application that they want to deploy in the Azure portal. Users will also need to copy some of the details for later use.
- In the Azure portal, go to the Marketplace page.
- Select the Kubernetes application.
- Select the required plan.
- Select the Create button.
- Fill out all the application (extension) details.
- In the Review + Create tab, select Download a template for automation. If all the validations are passed, you’ll see the ARM template in the editor.
- Examine the ARM template:
- In the variables section, copy the plan-name, plan-publisher, plan-offerID, and clusterExtensionTypeName values for later use.
- In the Microsoft.KubernetesConfiguration/extensions resource section, copy the configurationSettings section for later use.
Accept terms and agreements
- Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, use Azure CLI or Azure PowerShell. In this section, we will be using the Azure CLI.
- Be sure to use the values you copied prior to downloading the ARM template for plan-publisher, plan-offerID, and plan-name in your command.
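As in the ARM template section, a hedged sketch of the acceptance step:

```bash
az vm image terms accept \
  --publisher <plan-publisher> \
  --offer <plan-offerID> \
  --plan <plan-name>
```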
Setting the correct Context
Log in to the Azure portal, browse to Kubernetes services, select the respective Kubernetes service that was created (the AKS cluster), and click on Connect. This will help end users connect to their AKS cluster and set the respective Kubernetes context.
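Equivalently, the context can be set from the CLI (names are placeholders):

```bash
# Merge the cluster's credentials into your local kubeconfig and confirm the context
az aks get-credentials --resource-group <resource-group> --name <aks-cluster-name>
kubectl config current-context
```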
Creating the extension
Users can create a new extension instance with k8s-extension create, passing in values for the mandatory parameters.
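A hedged sketch of the create call, assuming the extension is named cilium and re-using the plan and configuration values copied from the generated ARM template; all flag values are placeholders:

```bash
# Create the Cilium marketplace extension on an existing AKS cluster
az k8s-extension create \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster-name> \
  --cluster-type managedClusters \
  --name cilium \
  --extension-type <clusterExtensionTypeName> \
  --plan-name <plan-name> \
  --plan-product <plan-offerID> \
  --plan-publisher <plan-publisher> \
  --configuration-settings <key>=<value>
```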
Update the extension
Users can update an existing extension (Cilium) installed on an AKS cluster with Isovalent Enterprise for Cilium.
- List the existing extension.
- Update the k8s-extension and enable the enterprise features (see the sketch below). Users will see a confirmation prompt while enabling these features and should select Yes.
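A hedged sketch; the configuration settings shown are illustrative examples of enterprise features being switched on, so confirm the exact keys for your release:

```bash
# List the extensions installed on the cluster
az k8s-extension list \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster-name> \
  --cluster-type managedClusters

# Update the Cilium extension with additional settings; answer Yes at the prompt
az k8s-extension update \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster-name> \
  --cluster-type managedClusters \
  --name cilium \
  --configuration-settings hubble.enabled=true hubble.relay.enabled=true
```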
Verifying the extension
Verify the k8s-extension by using the following command to list the extension and the features that were enabled on your cluster using the Azure CLI.
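For example:

```bash
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <aks-cluster-name> \
  --cluster-type managedClusters \
  --name cilium
```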
Benefits of Isovalent Enterprise for Cilium
Isovalent Enterprise provides a range of advanced enterprise features that we will demonstrate in this tutorial.
Note-
- Features described below have been enabled using an ARM template or Azure CLI.
- Users can get in touch with their partner Sales/SE representative(s) at sales@isovalent.com for more detailed insights into the below-explained features and get access to the requisite documentation and hubble CLI software images.
- Features such as Ingress and IPsec Encryption will be available shortly in an upcoming release.
1. Layer 3/Layer 4 Policy
When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies will be applied to the right pods based on the labels, irrespective of where or when they are running within the cluster.
The layer 3 policy establishes the base connectivity rules regarding which endpoints can talk to each other.
Layer 4 policy can be specified in addition to layer 3 policies or independently. It restricts the ability of an endpoint to emit and/or receive packets on a particular port using a particular protocol.
Consider a Star Wars-inspired example, in which there are three microservice applications: deathstar, tiefighter, and xwing. The deathstar runs an HTTP web service on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire's spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship, and xwing represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to deathstar landing services.
Validate L3/ L4 Policies
- Deploy the three services: deathstar, xwing, and tiefighter (see the consolidated sketch after this list).
- Kubernetes will deploy the pods and service in the background.
- Running kubectl get pods,svc will inform you about the progress of the operation.
- Check basic access
- From the perspective of the deathstar service, only the ships with label org=empire are allowed to connect and request landing. Since we have no rules enforced, both xwing and tiefighter will be able to request landing.
- We'll start with a basic policy restricting deathstar landing requests to only the ships that have the label org=empire. This will not allow any ships that don't have the org=empire label to even connect with the deathstar service. This is a simple policy that filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often referred to as an L3/L4 network security policy.
- This policy allows traffic sent from any pods with label org=empire to deathstar pods with label org=empire, class=deathstar on TCP port 80.
- Users can now apply this L3/L4 policy (shown in the sketch after this list):
- Now if we run the landing requests, only the tiefighter pods with the label org=empire will succeed. The xwing pods will be blocked.
- Now the same request run from an xwing pod will fail:
This request will hang, so press Control-C to kill the curl request, or wait for it to time out.
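A consolidated sketch of the steps above, following the upstream open-source Cilium Star Wars demo; the manifest URL and the policy below come from that guide and may differ slightly in the enterprise distribution:

```bash
# 1. Deploy the demo application (deathstar, tiefighter, xwing) and watch it come up
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/minikube/http-sw-app.yaml
kubectl get pods,svc

# 2. Check basic access: with no policy loaded, both ships can request landing
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

# 3. Apply the L3/L4 policy restricting landing requests to org=empire ships
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF

# 4. Re-test: tiefighter (org=empire) succeeds, xwing hangs and eventually times out
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
```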
2. HTTP-aware L7 Policy
Layer 7 policy rules are embedded into Layer 4 rules and can be specified for ingress and egress. A layer 7 request is permitted if at least one of the rules matches. If no rules are specified, then all traffic is permitted. If a layer 4 rule is specified in the policy, and a similar layer 4 rule with layer 7 rules is also specified, then the layer 7 portions of the latter rule will have no effect.
Note-
- With WireGuard encryption, l7Proxy is set to false, and hence it is recommended that users enable it (to use L7 policies) by updating the ARM template or via the Azure CLI.
- This will be available in an upcoming release.
In order to provide the strongest security (i.e., enforce least-privilege isolation) between microservices, each service that calls deathstar’s API should be limited to making only the set of HTTP requests it requires for legitimate operation.
For example, consider that the deathstar service exposes some maintenance APIs which should not be called by random empire ships.
Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what URLs the tiefighter pod is allowed to reach.
Validate L7 Policy
- Apply L7 Policy- Here is an example policy file that extends our original policy by limiting tiefighter to making only a POST /v1/request-landing API call, but disallowing all other calls (including PUT /v1/exhaust-port).
- Update the existing rule (from the L3/L4 section) to apply L7-aware policy to protect deathstar
- Users can re-run a curl towards the deathstar request-landing and exhaust-port APIs (see the sketch after this list).
- As this rule builds on the identity-aware rule, traffic from pods without the label org=empire will continue to be dropped, causing the connection to time out:
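A sketch of the L7 rule and the re-test, again following the upstream Cilium guide:

```bash
# Update rule1 so that org=empire ships may only POST /v1/request-landing
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"
EOF

# Allowed: landing requests from tiefighter
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

# Denied at L7: the maintenance API returns "Access denied"
kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port

# Still dropped at L3/L4: xwing traffic times out
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
```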
3. DNS-Based Policies
DNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by Google, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR or IP-based policies are cumbersome and hard to maintain as the IPs associated with external services can change frequently. The Cilium DNS-based policies provide an easy mechanism to specify access control while Cilium manages the harder aspects of tracking DNS to IP mapping.
Validate DNS-Based Policies
- In line with our Star Wars theme, we will use a simple scenario where the Empire's mediabot pods need access to GitHub for managing the Empire's git repositories. The pods shouldn't have access to any other external service.
- Apply a DNS egress policy. The following Cilium network policy (sketched after this list) allows mediabot pods to only access api.github.com.
- Testing the policy, we see that mediabot has access to api.github.com but doesn't have access to any other external service, e.g., support.github.com.
This request will hang, so press Control-C to kill the curl request, or wait for it to time out.
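A sketch based on the upstream open-source Cilium DNS-policy guide; the mediabot manifest URL and the policy below come from that guide:

```bash
# 1. Deploy the mediabot pod used in the Cilium DNS example
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/HEAD/examples/kubernetes-dns/dns-sw-app.yaml
kubectl wait pod/mediabot --for=condition=Ready

# 2. Allow mediabot to reach api.github.com only (DNS lookups via kube-dns stay allowed)
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: fqdn
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: "api.github.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
EOF

# 3. Test: api.github.com is reachable, support.github.com is not (times out)
kubectl exec mediabot -- curl -sI https://api.github.com | head -1
kubectl exec mediabot -- curl -sI --max-time 10 https://support.github.com | head -1
```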
4. Combining DNS, Port, and L7 Rules
The DNS-based policies can be combined with port (L4) and API (L7) rules to further restrict access. In our example, we will restrict mediabot pods to access GitHub services only on port 443. The toPorts section in the policy below achieves the port-based restrictions along with the DNS-based policies.
Validate the combination of DNS, Port and L7-based Rules
- Applying the policy
- When testing, access to https://support.github.com on port 443 will succeed, but access to http://support.github.com on port 80 will be denied (see the sketch below).
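A sketch of the combined policy and test, following the upstream guide:

```bash
# Restrict mediabot egress to *.github.com and only on TCP port 443
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: fqdn
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.github.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
EOF

# HTTPS (443) succeeds, plain HTTP (80) is denied and times out
kubectl exec mediabot -- curl -sI https://support.github.com | head -1
kubectl exec mediabot -- curl -sI --max-time 10 http://support.github.com | head -1
```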
5. Observing Network Flows with Hubble CLI (modern-day Wireshark)
Hubble's CLI extends the visibility that is provided by standard kubectl commands like kubectl get pods to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.
The Hubble CLI can be leveraged for observing network flows from Cilium agents. Users can observe the flows from their local workstation for troubleshooting or monitoring. For this tutorial, all Hubble outputs relate to the tests performed above; users can try other tests and will see corresponding results with different values, as expected.
Setup Hubble Relay Forwarding
Users can use kubectl port-forward to reach hubble-relay, then point the Hubble CLI at the forwarded Hubble server endpoint.
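A sketch, assuming Hubble Relay is exposed as the hubble-relay service in kube-system; once port 4245 is forwarded locally, the Hubble CLI's default server address works without further configuration:

```bash
# Forward the Hubble Relay service to the local workstation (runs in the background)
kubectl -n kube-system port-forward service/hubble-relay 4245:80 &

# The CLI talks to localhost:4245 by default; --server can point it elsewhere
hubble status --server localhost:4245
```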
Hubble Status
Hubble status can check the overall health of Hubble within your cluster. If using Hubble Relay, a counter for the number of connected nodes will appear in the last line of the output.
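For example:

```bash
hubble status
```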
View Last N Events
Hubble observe displays the most recent events based on the number filter. Hubble Relay will display events over all the connected nodes:
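For example, to show the last 10 flows:

```bash
hubble observe --last 10
```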
Follow Events in Real-Time
Hubble observe --follow will follow the event stream for all connected clusters.
Troubleshooting HTTP & DNS
- If a user has a CiliumNetworkPolicy that enforces DNS or HTTP policy, we can use the --type l7 filtering option for hubble to check the HTTP methods and DNS resolution attempts of our applications.
- Users can use --http-status to view specific flows with 200 HTTP responses.
- Users can also show only HTTP PUT methods with --http-method.
- To view DNS traffic for a specific FQDN, users can use the --to-fqdn flag.
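A sketch of those filters; the flag names come from the open-source Hubble CLI, and the FQDN is a placeholder:

```bash
# All L7 flows parsed by the proxy (HTTP, DNS, Kafka)
hubble observe --type l7

# Only flows answered with HTTP 200
hubble observe --http-status 200

# Only HTTP PUT requests
hubble observe --http-method PUT

# DNS traffic towards a specific FQDN
hubble observe --to-fqdn "api.github.com"
```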
Filter by Verdict
Hubble provides a field called VERDICT that displays one of FORWARDED, ERROR, or DROPPED for each flow. DROPPED could indicate an unsupported protocol within the underlying platform or a Network Policy enforcing pod communication. Hubble is able to introspect the reason for ERROR or DROPPED flows and display the reason within the TYPE field of each flow.
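For example, to see only dropped flows:

```bash
hubble observe --verdict DROPPED
```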
Filter by Pod or Namespace
- To show all flows for a specific pod, users can filter with the --pod flag.
- If users are only interested in traffic from a pod to a specific destination, we combine --from-pod and --to-pod.
- If users want to see all traffic from a specific namespace, we specify the --from-namespace flag.
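A sketch using the Star Wars pods from earlier; the namespaced pod names are placeholders:

```bash
# All flows involving a single pod
hubble observe --pod default/deathstar

# Traffic from one pod to a specific destination
hubble observe --from-pod default/tiefighter --to-pod default/deathstar

# All traffic originating in a namespace
hubble observe --from-namespace default
```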
Filter Events with JQ
To filter events with the jq tool, we can switch the output to json mode. When we then inspect the metadata through jq, we see more metadata around the workload labels, such as the pod name/namespace assigned to both source and destination. This information is accessible to Cilium because it is encoded in the packets based on pod identities.
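For example:

```bash
# Emit flows as JSON so jq can inspect the workload metadata (labels, pod names, namespaces)
hubble observe -o json --last 5 | jq .
```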
6. Encryption
Cilium supports the transparent encryption of Cilium-managed host traffic and traffic between Cilium-managed endpoints using either WireGuard® or IPsec. In this tutorial, we will be covering WireGuard.
Note-
For WireGuard encryption, l7Proxy must be set to false, and hence it is recommended that users disable it by updating the ARM template or via the Azure CLI.
Wireguard
When WireGuard is enabled in Cilium, the agent running on each cluster node will establish a secure WireGuard tunnel between it and all other known nodes in the cluster.
Packets are not encrypted when they are destined to the same node from which they were sent. This behavior is intended. Encryption would provide no benefits in that case, given that the raw traffic can be observed on the node anyway.
Validate Wireguard Encryption
- To demonstrate WireGuard encryption, users can create a client pod spun up on one node and a server pod spun up on another node in AKS (see the consolidated sketch after this list).
- The client is doing a “wget” towards the server every 2 seconds.
- Run a bash shell in one of the Cilium pods with kubectl -n kube-system exec -ti ds/cilium -- bash and execute the following commands:
- Check that WireGuard has been enabled (the number of peers should correspond to the number of nodes minus one):
- Install tcpdump on the node where the server pod has been created.
- Check that traffic (HTTP requests and responses) is sent via the cilium_wg0 tunnel device on the node where the server pod has been created:
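A consolidated sketch of the checks above; the example status line reflects the open-source Cilium output format, and opening a shell on the node (for tcpdump) is left to whatever node-access method you use:

```bash
# Confirm WireGuard encryption is active; Peers should be (number of nodes - 1)
kubectl -n kube-system exec -ti ds/cilium -- cilium status | grep Encryption
# Example: Encryption: Wireguard [cilium_wg0 (Pubkey: ..., Port: 51871, Peers: 1)]

# On the node hosting the server pod, install tcpdump and watch the WireGuard tunnel
# device; the HTTP requests/responses from the client are carried over cilium_wg0
tcpdump -n -i cilium_wg0
```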
7. Kube-Proxy Replacement
One of the additional benefits of using Cilium is its extremely efficient data plane. It's particularly useful at scale, as the standard kube-proxy is based on a technology – iptables – that was never designed for the churn and scale of large Kubernetes clusters.
Validate Kube-Proxy Replacement
- Users can first validate that the Cilium agent is running in the desired mode, with kube-proxy replacement set to Strict (see the consolidated sketch after this list):
- Users can deploy nginx pods, create a new NodePort service and validate that Cilium installed the service correctly.
- The following yaml is used for the backend pods:
- Verify that the NGINX pods are up and running:
- Users can create a NodePort service for the instances:
- Users can verify that the NodePort service has been created:
- With the help of the cilium service list command, we can validate that Cilium's eBPF kube-proxy replacement created the new NodePort service under port 31076 (truncated output).
- At the same time, users can verify, using iptables in the host namespace (on the node), that no iptables rule for the service is present:
- Last but not least, a simple curl test shows connectivity for the exposed NodePort port 31076 as well as for the ClusterIP:
Conclusion
Hopefully, this post gave you a good overview of how to enable advanced features such as Layer 3, 4, and 7 policies, DNS-based policies, and observing network flows with the Hubble CLI on Isovalent Enterprise for Cilium from the Azure Marketplace.
If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
Further Reading
- Isovalent Enterprise for Cilium- General Availability
- Isovalent Enterprise for Cilium- Azure Marketplace
- Azure and Isovalent main partner page
- Isovalent Enterprise for Cilium now Available on Microsoft Azure Marketplace
- Deploying Isovalent Enterprise for Cilium from Azure Marketplace
- Microsoft and Isovalent partnership announcement about bringing eBPF-based Networking and Security to Azure
- You can also read more about Azure CNI powered by Cilium in our announcement blog post and don’t forget to follow our tutorial
- Isovalent Enterprise for Cilium on Microsoft Azure Marketplace
- Kubecon updates

Amit Gupta is a Senior Technical Marketing Engineer at Isovalent, the company powering eBPF-based cloud-native networking and security. Amit has 20+ years of experience in Networking, Telecommunications, Cloud, Security, and Open Source and has worked in the past with companies like Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area in his time outside of work. Amit is an avid runner and cyclist and also spends a considerable amount of time helping kids in orphanages.