Cilium on AKS using ARM templates

Amit Gupta
Jul 12, 2023 · Isovalent

“At any given moment, you have the power to say: this is not how the story is going to end.” —Christine Mason Miller

With that, we resume where Part 1 of this blog, Deploying Isovalent Enterprise for Cilium from the Azure Marketplace, left off. In this tutorial, users will learn how to enable advanced features provided by Isovalent using Azure Infrastructure as Code (IaC) tooling, namely ARM (Azure Resource Manager) templates and the Azure CLI, on top of the Azure Marketplace offer.

With Azure CNI Powered by Cilium, AKS is now natively powered by Cilium: it combines the robust control plane of Azure CNI with the data plane of Cilium to provide high-performance networking and security.

AKS customers will also benefit from a seamless one-click upgrade experience from Azure CNI Powered by Cilium to Isovalent Enterprise for Cilium platform. The enterprise platform is available in the Azure Container Marketplace and makes the complete Cilium feature set available to Azure customers. This includes security and governance controls, extended network capabilities, the complete set of Isovalent Enterprise features, and more!

The tight integration into the Azure platform simplifies operations by enabling auto-upgrades and natively integrating into the Azure ecosystem for SIEM export, monitoring, and governance control. The unified billing experience will eliminate management overhead. Finally, the support collaboration will maximize the reliability and customer experience of the platform.

What is Isovalent Enterprise for Cilium

Isovalent Enterprise for Cilium is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium

For enterprise customers requiring support and/or usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended.

This offering brings complete flexibility in terms of access to Cilium features while retaining ease of use and seamless integration with Azure.

Prerequisites

  • An AKS cluster up and running with Azure CNI powered by Cilium.
  • Azure CLI version 2.41.0 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
  • Templates are JavaScript Object Notation (JSON) files. To create templates, you need a good JSON editor. We recommend Visual Studio Code with the Azure Resource Manager Tools extension. If you need to install these tools, see Quickstart: Create ARM templates with Visual Studio Code.
  • The kubectl command line tool is installed on your device. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.

Register resource providers

Before you deploy a container offer, you must register the Microsoft.ContainerService and Microsoft.KubernetesConfiguration providers on your subscription by using the az provider register command:

az provider register --namespace Microsoft.ContainerService --wait
az provider register --namespace Microsoft.KubernetesConfiguration --wait

Deploying a Kubernetes Application 

There are multiple ways to deploy a Kubernetes application on an AKS cluster running Isovalent Enterprise for Cilium; this tutorial walks through two of them: Azure ARM templates and the Azure CLI.

Azure ARM Template

With the move to the cloud, many teams have adopted agile development methods. These teams iterate quickly. They need to repeatedly deploy their solutions to the cloud, and know their infrastructure is in a reliable state.

To implement infrastructure as code for your Azure solutions, you can use Azure Resource Manager (ARM) templates. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources.
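For readers new to ARM templates, every template shares the same basic shape: a JSON document with a schema reference, parameters, variables, and a list of resources. A minimal, empty skeleton looks roughly like this (for orientation only; the downloaded template from the Marketplace is the one you will actually deploy):

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "variables": {},
  "resources": []
}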

To deploy a Kubernetes application programmatically through ARM templates, a user needs to select the Kubernetes application and settings, generate an ARM template, accept legal terms and conditions, and finally deploy the ARM template.

Select Kubernetes Application

First, you need to select the Kubernetes application that you want to deploy/upgrade in the Azure portal.

  • In the Azure portal, search for Marketplace on the top search bar. In the results, under Services, select Marketplace.
  • You can search for an application or publisher directly by name, or you can browse all applications. To find Kubernetes applications, on the left side under Categories, select Containers.

Generate ARM Template

  • In the search window type “Isovalent” and select the application.
  • On the Plans + Pricing tab, select an option. Ensure that the terms are acceptable, and then select Create.
  • Select the respective subscription in which the existing AKS cluster has already been created with Azure CNI powered by Cilium.
  • Select the resource group to deploy the cluster in.
  • Under Create New Dev Cluster, select “No” and click Next: Cluster Details.

Note- To create a new AKS cluster instead, select the option “Create New” under Resource Group, and select “Yes” at the “Create new dev cluster” step to create the new cluster with Isovalent Enterprise for Cilium.

  • The AKS cluster name is available from the drop-down; select it and click “Next: Review + Create”.
  • Once final validation is complete, click “View Automation Template” and then “Download”.
  • If all the validations have passed, you’ll see the ARM template in the editor.
  • Download the ARM template and save it to a location on your computer.
  • Note the following values, which will be used later in this tutorial:
    • plan-publisher
    • plan-offerID
    • plan-name
  • Exit the Azure portal screen and do not click Create in the UI workflow; the ARM template will take over installing the “Cilium” extension and enabling the Enterprise features.

Editing the JSON file

  • The user can unzip the ARM template bundle, which includes a parameters.json file, and edit the following fields (an illustrative snippet follows this list):
    • clusterResourceName– The name of the AKS cluster that is either being created or an existing AKS cluster running Azure CNI powered by Cilium.
    • createNewCluster– Defaults to false, meaning an existing AKS cluster running Azure CNI powered by Cilium will be upgraded to Isovalent Enterprise for Cilium. Set it to true to create a new cluster with Isovalent Enterprise for Cilium.
    • extensionResourceName– The name of the Cilium extension resource; it should not be edited.
    • location– The Azure region in which the AKS cluster has been created or is being upgraded.
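As an illustration only (the authoritative parameter names and structure come from the downloaded template), an edited parameters.json could look roughly like this:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterResourceName": { "value": "<aks-cluster-name>" },
    "createNewCluster": { "value": false },
    "extensionResourceName": { "value": "cilium" },
    "location": { "value": "<azure-region>" }
  }
}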

Accept terms and agreements

Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, use Azure CLI or Azure PowerShell. For this tutorial, we will be using Azure CLI to deploy the ARM template.

Be sure to use the values you copied prior to downloading the ARM template for plan-publisher, plan-offerID, and plan-name in your command.

az vm image terms accept --offer <Product ID> --plan <Plan ID> --publisher <Publisher ID>
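For example, with the plan values that appear in the sample template later in this tutorial (plan-offerID as the offer, plan-name as the plan, and plan-publisher as the publisher), the command would look roughly like this:

az vm image terms accept --offer isovalent-cilium-enterprise --plan isovalent-cilium-enterprise-base-edition --publisher isovalentinc3222233121323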

Values.yaml file

  • To enable the Enterprise features covered in this tutorial, the user can create a values file such as the example below:
namespace: kube-system
hubble.enabled: true
hubble.relay.enabled: true
encryption.enabled: true
encryption.type: wireguard
kubeProxyReplacement: strict
k8sServicePort: <API_SERVER_PORT>
k8sServiceHost: <API_SERVER_IP>
l7Proxy: false

Setting Enterprise Values for the ARM template

  • A user can download yq and inject custom Helm values into the ARM template, since the extension’s configurationSettings follow a key:value syntax.
    • Users can use a tool of their choice to inject the Helm values.
yq -o=json '.resources[1].properties.configurationSettings |= load("./values.yaml")' template.json
  • The output is a JSON file that will be used to deploy the ARM template; a sample is shown below.
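To write the result straight to a new file instead of copying it from the terminal, the output can be redirected (the output file name here is just an example):

yq -o=json '.resources[1].properties.configurationSettings |= load("./values.yaml")' template.json > template-enterprise.json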

Note– The output below is truncated.

...
      "name": "[parameters('extensionResourceName')]",
      "plan": {
        "name": "[variables('plan-name')]",
        "product": "[variables('plan-offerID')]",
        "publisher": "[variables('plan-publisher')]"
      },
      "properties": {
        "autoUpgradeMinorVersion": true,
        "configurationProtectedSettings": {},
        "configurationSettings": {
          "namespace": "kube-system",
          "hubble.enabled": true,
          "hubble.relay.enabled": true,
          "encryption.enabled"  : true,
          "encryption.type"     : "wireguard",
          "l7Proxy"             : false,
		"kubeProxyReplacement": "strict",
		"k8sServiceHost"	  : "API_SERVER_IP",
		"k8sServicePort"	  : "API_SERVER_PORT"
        },
        "extensionType": "[variables('clusterExtensionTypeName')]",
        "releaseTrain": "[variables('releaseTrain')]"
      },
      "scope": "[concat('Microsoft.ContainerService/managedClusters/', parameters('clusterResourceName'))]",
      "type": "Microsoft.KubernetesConfiguration/extensions"
    }
  ],
  "variables": {
    "clusterExtensionTypeName": "Isovalent.CiliumEnterprise.One",
    "plan-name": "isovalent-cilium-enterprise-base-edition",
    "plan-offerID": "isovalent-cilium-enterprise",
    "plan-publisher": "isovalentinc3222233121323",
    "releaseTrain": "stable"
  }
}

Note-

  • When Hubble is installed, it creates pods in the kube-system namespace by default. If you have a SIEM such as Microsoft Defender, you may see an alert similar to “New container ‘hubble-relay’ in the kube-system namespace detected“.
  • This is the expected installation behavior for the Cilium and Hubble components, so with the default alerting rules these alerts are expected. Cilium and Hubble follow best practices by running within the kube-system namespace of the cluster.
  • Adding an exception for such events to Microsoft Defender’s alerting rules can help suppress them. For such modifications, you can get in touch with Microsoft Support. The same applies to alternative External Attack Surface Management tools.

Deploy ARM Template

  • To start working with ARM templates, sign in with your Azure credentials in Azure CLI.
az login
  • If you have multiple Azure subscriptions, choose the subscription you want to use.
    • Replace SubscriptionName with your subscription name.
    • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName
  • To deploy the template, use either Azure CLI or Azure PowerShell. Use the resource group you created while creating the cluster on Azure CNI powered by Cilium.
    • The template file is a JSON file.
templateFile="{provide-the-path-to-the-template-file}"
az deployment group create \
  --name <deployment-name> \
  --resource-group <resource-group name> \
  --template-file $templateFile
  • The deployment command returns results. Look for ProvisioningState to see whether the deployment succeeded.

Note– The output below is truncated.

...

"provisioningState": "Succeeded",
    "templateHash": "93xxxxxxxxxxxx9951",
    "templateLink": null,
    "timestamp": "2023-06-15T17:46:08.011055+00:00",
    "validatedResources": null

Verify Deployment

  • You can verify the deployment by exploring the resource group from the Azure portal.
  • Sign in to the Azure portal.
  • From the left menu, select Resource Groups.
  • Check the box to the left of the resource group in which the AKS cluster resides.
  • Select the resource group you used in the earlier procedure.

  • You can also check the extensions installed on the cluster from Azure Portal. On the AKS cluster, go to the “Extensions + applications” menu to verify the settings.
  • Optional– Log in to the Azure portal, browse to Kubernetes services, select the AKS cluster that was created, and click Connect. This helps end users connect to their AKS cluster and set the respective Kubernetes context.
az account set --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx

az aks get-credentials --resource-group <resourcegroup-name> --name <clustername>
  • Verify the deployment by using the following command to list the extension and features that were enabled on your cluster after the successful deployment of the ARM template:
az k8s-extension show --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters -n cilium

{
  "aksAssignedIdentity": {
    "principalId": "9b9b5044-57c6-49c8-923a-0953b0cadfd8",
    "tenantId": null,
    "type": null
  },
  "autoUpgradeMinorVersion": true,
  "configurationProtectedSettings": {},
  "configurationSettings": {
    "encryption.enabled": "true",
    "encryption.type": "wireguard",
    "hubble.enabled": "true",
    "hubble.relay.enabled": "true",
    "k8sServiceHost": "testciliumextension-7aqazoma.hcp.centralindia.azmk8s.io",
    "k8sServicePort": "443",
    "kubeProxyReplacement": "strict",
    "l7Proxy": "false",
    "namespace": "kube-system"
  },
  "currentVersion": "1.0.0",
  "customLocationSettings": null,
  "errorInfo": null,
  "extensionType": "Isovalent.CiliumEnterprise.One",
  "id": "/subscriptions/###################################/resourceGroups/testciliumextension/providers/Microsoft.ContainerService/managedClusters/testciliumextension/providers/Microsoft.KubernetesConfiguration/extensions/cilium",
  "identity": null,
  "isSystemExtension": false,
  "name": "cilium",
  "packageUri": null,
  "plan": {
    "name": "isovalent-cilium-enterprise-base-edition",
    "product": "isovalent-cilium-enterprise",
    "promotionCode": null,
    "publisher": "isovalentinc3222233121323",
    "version": null
  },
  "provisioningState": "Succeeded",
  "releaseTrain": "stable",
  "resourceGroup": "testciliumextension",
  "scope": {
    "cluster": {
      "releaseNamespace": "kube-system"
    },
    "namespace": null
  },
  "statuses": [],
  "systemData": {
    "createdAt": "2023-07-12T17:32:56.255587+00:00",
    "createdBy": null,
    "createdByType": null,
    "lastModifiedAt": "2023-07-12T17:42:43.573078+00:00",
    "lastModifiedBy": null,
    "lastModifiedByType": null
  },
  "type": "Microsoft.KubernetesConfiguration/extensions",
  "version": null
}
  • Validate the version of Isovalent Enterprise for Cilium running on the cluster.
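One way to check (assuming kubectl access to the cluster) is to run cilium version inside one of the Cilium agent pods:

kubectl -n kube-system exec ds/cilium -- cilium version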
Client: 1.12.11-cee.1 88bf67bd 2023-06-15T03:12:05+00:00 go version go1.18.10 linux/amd64
Daemon: 1.12.11-cee.1 88bf67bd 2023-06-15T03:12:05+00:00 go version go1.18.10 linux/amd64

Azure CLI

A user can create extension instances in an AKS cluster, setting required and optional parameters including options related to updates and configurations. You can also view, list, update, and delete extension instances.

Prerequisites

  • An Azure subscription.
  • While creating the extension, make sure you have an AKS cluster up and running with Azure CNI powered by Cilium.
  • While updating the extension, make sure you have an AKS cluster up and running with Isovalent Enterprise for Cilium from the Azure Marketplace.
  • The Microsoft.ContainerService and Microsoft.KubernetesConfiguration resource providers must be registered on your subscription. 
  • To register these providers, run the following commands:
az provider register --namespace Microsoft.ContainerService --wait 

az provider register --namespace Microsoft.KubernetesConfiguration --wait
  • The latest version of the k8s-extension Azure CLI extension. Install the extension by running the following command:
az extension add --name k8s-extension
  • If the extension is already installed, make sure you’re running the latest version by using the following command:
az extension update --name k8s-extension

Select the Kubernetes application

Users need to select the Kubernetes application that they want to deploy in the Azure portal. Users will also need to copy some of the details for later use.

  • In the Azure portal, go to the Marketplace page.
  • Select the Kubernetes application.
  • Select the required plan.
  • Select the Create button.
  • Fill out all the application (extension) details.
  • In the Review + Create tab, select Download a template for automation. If all the validations are passed, you’ll see the ARM template in the editor.
  • Examine the ARM template:
    • In the variables section, copy the plan-name, plan-publisher, plan-offerID, and clusterExtensionTypeName values for later use.
    • In the Microsoft.KubernetesConfiguration/extensions resource section, copy the configurationSettings section for later use.

Accept terms and agreements

  • Before you can deploy a Kubernetes application, you need to accept its terms and agreements. To do so, use Azure CLI or Azure PowerShell. In this section, we will be using Azure CLI to deploy the ARM template.
  • Be sure to use the values you copied prior to downloading the ARM template for plan-publisher, plan-offerID, and plan-name in your command.
az vm image terms accept --offer <Product ID> --plan <Plan ID> --publisher <Publisher ID>

Setting the correct Context

Log in to the Azure portal, browse to Kubernetes services, select the AKS cluster that was created, and click Connect. This helps end users connect to their AKS cluster and set the respective Kubernetes context.

az account set --subscription xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx

az aks get-credentials --resource-group <resourcegroup-name> --name <clustername>

Creating the extension

Users can create a new extension instance with k8s-extension create, passing in values for the mandatory parameters.

az k8s-extension create --name cilium --extension-type Isovalent.CiliumEnterprise.One --scope cluster --cluster-name ciliumossazmktplace  --resource-group ciliumossazmktplace --cluster-type managedClusters --plan-name isovalent-cilium-enterprise-base-edition --plan-product isovalent-cilium-enterprise --plan-publisher isovalentinc3222233121323

Update the extension

Users can update an existing extension (Cilium) installed on an AKS cluster with Isovalent Enterprise for Cilium.

  • Listing an existing extension
az k8s-extension show -c <cluster-name> -t managedClusters -g <resource-group> -n cilium
  • Updating k8s-extension and enabling the enterprise features.
    • Users will see a confirmation prompt while enabling these features and should select Yes.
az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings namespace=kube-system 

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings hubble.enabled=true 

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings hubble.relay.enabled=true

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings encryption.enabled=true

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings encryption.type=wireguard

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings l7Proxy=false

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings kubeProxyReplacement=strict

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings k8sServicePort=<API_SERVER_PORT>

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings k8sServiceHost=<API_SERVER_IP>
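If preferred, these settings can typically be combined into a single update call by passing multiple space-separated key=value pairs to --configuration-settings (check az k8s-extension update --help for the exact syntax supported by your CLI version); the placeholders below match the commands above:

az k8s-extension update -c <cluster-name> -t managedClusters -g <resource-group> -n cilium --configuration-settings namespace=kube-system hubble.enabled=true hubble.relay.enabled=true encryption.enabled=true encryption.type=wireguard l7Proxy=false kubeProxyReplacement=strict k8sServicePort=<API_SERVER_PORT> k8sServiceHost=<API_SERVER_IP>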

Verifying the extension

Verify the k8s-extension by using the following command to list the extension and features that were enabled on your cluster using Azure CLI.

az k8s-extension show -c <cluster-name> -t managedClusters -g <resource-group> -n cilium

{
  "aksAssignedIdentity": {
    "principalId": "9b9b5044-57c6-49c8-923a-0953b0cadfd8",
    "tenantId": null,
    "type": null
  },
  "autoUpgradeMinorVersion": true,
  "configurationProtectedSettings": {},
  "configurationSettings": {
    "encryption.enabled": "true",
    "encryption.type": "wireguard",
    "hubble.enabled": "true",
    "hubble.relay.enabled": "true",
    "k8sServiceHost": "testciliumextension-7aqazoma.hcp.centralindia.azmk8s.io",
    "k8sServicePort": "443",
    "kubeProxyReplacement": "strict",
    "l7Proxy": "false",
    "namespace": "kube-system"
  },
  "currentVersion": "1.0.0",
  "customLocationSettings": null,
  "errorInfo": null,
  "extensionType": "Isovalent.CiliumEnterprise.One",
  "id": "/subscriptions/###################################/resourceGroups/testciliumextension/providers/Microsoft.ContainerService/managedClusters/testciliumextension/providers/Microsoft.KubernetesConfiguration/extensions/cilium",
  "identity": null,
  "isSystemExtension": false,
  "name": "cilium",
  "packageUri": null,
  "plan": {
    "name": "isovalent-cilium-enterprise-base-edition",
    "product": "isovalent-cilium-enterprise",
    "promotionCode": null,
    "publisher": "isovalentinc3222233121323",
    "version": null
  },
  "provisioningState": "Succeeded",
  "releaseTrain": "stable",
  "resourceGroup": "testciliumextension",
  "scope": {
    "cluster": {
      "releaseNamespace": "kube-system"
    },
    "namespace": null
  },
  "statuses": [],
  "systemData": {
    "createdAt": "2023-07-12T17:32:56.255587+00:00",
    "createdBy": null,
    "createdByType": null,
    "lastModifiedAt": "2023-07-12T17:42:43.573078+00:00",
    "lastModifiedBy": null,
    "lastModifiedByType": null
  },
  "type": "Microsoft.KubernetesConfiguration/extensions",
  "version": null
}

Benefits of Isovalent Enterprise for Cilium

Isovalent Enterprise provides a range of advanced enterprise features that we will demonstrate in this tutorial.

Note-

  • Features described below have been enabled using an ARM template or Azure CLI. 
  • Users can get in touch with their partner Sales/SE representative(s) at sales@isovalent.com for more detailed insights into the below-explained features and get access to the requisite documentation and hubble CLI software images.
  • Features such as Ingress and IPsec Encryption will be available shortly in an upcoming release.

1. Layer 3/Layer 4 Policy

When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies are applied to the right pods based on the labels, irrespective of where or when they are running within the cluster.

The layer 3 policy establishes the base connectivity rules regarding which endpoints can talk to each other. 

Layer 4 policy can be specified in addition to layer 3 policies or independently. It restricts the ability of an endpoint to emit and/or receive packets on a particular port using a particular protocol.

Consider a Star Wars-inspired example with three microservices: deathstar, tiefighter, and xwing. The deathstar runs an HTTP web service on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire’s spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship and xwing represents a similar service on an alliance ship. They exist so that we can test different security policies for access control to the deathstar landing services.

Validate L3/ L4 Policies

  • Deploy the demo application with its three components: deathstar, tiefighter, and xwing
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/http-sw-app.yaml
service/deathstar created
deployment.apps/deathstar created
pod/tiefighter created
pod/xwing created
  • Kubernetes will deploy the pods and service in the background.
  • Running kubectl get pods,svc will inform you about the progress of the operation.
kubectl get pods,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/client                       1/1     Running   0          23h
pod/deathstar-54bb8475cc-4gcv4   1/1     Running   0          3m9s
pod/deathstar-54bb8475cc-lq6sv   1/1     Running   0          3m9s
pod/server                       1/1     Running   0          23h
pod/tiefighter                   1/1     Running   0          3m9s
pod/xwing                        1/1     Running   0          3m9s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/deathstar    ClusterIP   10.0.114.36    <none>        80/TCP    3m9s
service/kubernetes   ClusterIP   10.0.0.1       <none>        443/TCP   4d1h
  • Check basic access
    • From the perspective of the deathstar service, only the ships with label org=empire are allowed to connect and request landing. Since we have no rules enforced, both xwing and tiefighter will be able to request landing.
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
  • We’ll start with a basic policy restricting deathstar landing requests to only the ships that have the label org=empire. This will not allow ships without the org=empire label to even connect to the deathstar service. It is a simple policy that filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often referred to as an L3/L4 network security policy.
  • This policy (reproduced below) whitelists traffic sent from any pod with the label org=empire to deathstar pods with the labels org=empire, class=deathstar on TCP port 80.
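For reference, sw_l3_l4_policy.yaml looks roughly like this (reproduced from the upstream Cilium example; verify against the file you actually apply):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP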
  • Users can now apply this L3/L4 policy:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/sw_l3_l4_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
  • Now if we run the landing requests, only the tiefighter pods with the label org=empire will succeed. The xwing pods will be blocked.
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
  • Now the same request run from an xwing pod will fail:
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request, or wait for it to time out.

2. HTTP-aware L7 Policy

Layer 7 policy rules are embedded into Layer 4 rules and can be specified for ingress and egress. A layer 7 request is permitted if at least one of the rules matches. If no rules are specified, then all traffic is permitted. If a layer 4 rule is specified in the policy, and a similar layer 4 rule with layer 7 rules is also specified, then the layer 7 portions of the latter rule will have no effect.

Note-

  • Because WireGuard encryption is enabled in this tutorial, l7Proxy is set to false; to use HTTP-aware L7 policies, users would need to re-enable it by updating the ARM template or via the Azure CLI.
  • Support for this combination will be available in an upcoming release.

In order to provide the strongest security (i.e., enforce least-privilege isolation) between microservices, each service that calls deathstar’s API should be limited to making only the set of HTTP requests it requires for legitimate operation.

For example, consider that the deathstar service exposes some maintenance APIs which should not be called by random empire ships.

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded

goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/
        temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/
        temp/main.go:5 +0x85

Cilium is capable of enforcing HTTP-layer (i.e., L7) policies to limit what URLs the tiefighter pod is allowed to reach.

Validate L7 Policy

  • Apply L7 Policy- Here is an example policy file that extends our original policy by limiting tiefighter to making only a POST /v1/request-landing API call, but disallowing all other calls (including PUT /v1/exhaust-port).
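For reference, sw_l3_l4_l7_policy.yaml looks roughly like this (again reproduced from the upstream Cilium example; verify against the file you actually apply):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"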
  • Update the existing rule (from the L3/L4 section) to apply L7-aware policy to protect deathstar
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/sw_l3_l4_l7_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 configured
  • Users can re-run a curl towards deathstar & exhaust-port
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
  • As this rule builds on the identity-aware rule, traffic from pods without the label org=empire will continue to be dropped causing the connection to time out:
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

3. DNS-Based Policies

DNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by Google, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR or IP-based policies are cumbersome and hard to maintain as the IPs associated with external services can change frequently. The Cilium DNS-based policies provide an easy mechanism to specify access control while Cilium manages the harder aspects of tracking DNS to IP mapping.

Validate DNS-Based Policies

  • In line with our Star Wars theme examples, we will use a simple scenario where the Empire’s mediabot pods need access to GitHub for managing the Empire’s git repositories. The pods shouldn’t have access to any other external service.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-sw-app.yaml
pod/mediabot created
  • Apply DNS Egress Policy- The following Cilium network policy allows mediabot pods to only access api.github.com
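For reference, dns-matchname.yaml looks roughly like this (reproduced from the upstream Cilium example; the first egress rule allows DNS lookups via kube-dns so that Cilium can learn the FQDN-to-IP mapping):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchName: "api.github.com"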
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-matchname.yaml
ciliumnetworkpolicy.cilium.io/fqdn created
  • Testing the policy, we see that mediabot has access to api.github.com but doesn’t have access to any other external service, e.g., support.github.com
kubectl exec mediabot -- curl -I -s https://api.github.com | head -1
HTTP/1.1 200 OK

kubectl exec mediabot -- curl -I -s https://support.github.com | head -1
curl: (28) Connection timed out after 5000 milliseconds
command terminated with exit code 28

This request will hang, so press Control-C to kill the curl request, or wait for it to time out.

4. Combining DNS, Port, and L7 Rules

The DNS-based policies can be combined with port (L4) and API (L7) rules to further restrict access. In our example, we will restrict mediabot pods to access GitHub services only on port 443. The toPorts section in the policy below achieves the port-based restrictions along with the DNS-based policies.
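For reference, dns-port.yaml looks roughly like this (reproduced from the upstream Cilium example; note the toPorts entry limiting *.github.com egress to TCP 443, while the kube-dns rule for DNS visibility is retained):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.github.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"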

Validate the combination of DNS, Port and L7-based Rules

  • Applying the policy
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-port.yaml
ciliumnetworkpolicy.cilium.io/fqdn configured
  • When testing, access to https://support.github.com on port 443 succeeds, while access to http://support.github.com on port 80 is denied.
kubectl exec mediabot -- curl -I -s https://support.github.com | head -1
HTTP/1.1 200 OK

kubectl exec mediabot -- curl -I -s --max-time 5 http://support.github.com | head -1
command terminated with exit code 28

5. Observing Network Flows with Hubble CLI (modern-day Wireshark)

Hubble’s CLI extends the visibility that is provided by standard kubectl commands like kubectl get pods to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.

The Hubble CLI can be leveraged for observing network flows from Cilium agents. Users can observe the flows from their local workstation for troubleshooting or monitoring. In this tutorial, all the Hubble output shown relates to the tests performed above; users can run other tests and will see similar results with different values, as expected.

Setup Hubble Relay Forwarding

Users can use kubectl port-forward to the hubble-relay service and then point the Hubble CLI at the forwarded address.

kubectl port-forward -n kube-system svc/hubble-relay --address 0.0.0.0 4245:80
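Because the port-forward publishes Hubble Relay on localhost:4245, which is the Hubble CLI’s default server address, the hubble commands below work without extra flags. To point the CLI at a different address explicitly, the --server flag can be used, for example:

hubble status --server localhost:4245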

Hubble Status

hubble status checks the overall health of Hubble within your cluster. If using Hubble Relay, a counter for the number of connected nodes appears in the last line of the output.

hubble status 
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 30.89
Connected Nodes: 2/2

View Last N Events

hubble observe --last N displays the N most recent events. Hubble Relay will display events from all the connected nodes:

hubble observe --last 5

Jun 26 07:59:01.759: 10.10.0.5:37668 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37666 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37668 (host) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-stack FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37668 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK)
Jun 26 07:59:01.759: 10.10.0.5:37666 (host) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-stack FORWARDED (TCP Flags: ACK, FIN)

Follow Events in Real-Time

hubble observe --follow will follow the event stream for all nodes connected to Hubble Relay.

hubble observe --follow

Jun 26 08:09:47.938: 10.10.0.4:55976 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))

Troubleshooting HTTP & DNS

  • If a user has a CiliumNetworkPolicy that enforces DNS or HTTP policy, the --type l7 filter option for hubble observe can be used to check the HTTP methods and DNS resolution attempts of our applications.
hubble observe --since 1m -t l7

Jun 26 08:15:17.888: default/tiefighter:46930 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-request FORWARDED (HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing)
Jun 26 08:15:17.888: default/tiefighter:46930 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jun 26 08:15:18.384: default/tiefighter:46932 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:15:18.384: default/tiefighter:46932 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
  • Users can use --http-status to view specific flows with 200 HTTP responses
hubble observe --http-status 200

Jun 26 08:18:00.885: default/tiefighter:53064 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jun 26 08:18:07.510: default/tiefighter:43448 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 1ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
  • Users can also just show HTTP PUT methods with --http-method
hubble observe --http-method PUT

Jun 26 08:19:51.270: default/tiefighter:55354 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:19:51.270: default/tiefighter:55354 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
  • To view DNS traffic for a specific FQDN, users can use the --to-fqdn flag
hubble observe --to-fqdn "*.github.com"

Jun 26 10:34:34.196: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) policy-verdict:all EGRESS ALLOWED (TCP Flags: SYN)
Jun 26 10:34:34.196: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 10:34:34.198: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK)
Jun 26 10:34:34.198: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 10:34:34.200: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK, FIN)

Filter by Verdict

Hubble provides a field called VERDICT that displays one of FORWARDED, ERROR, or DROPPED for each flow. DROPPED could indicate an unsupported protocol in the underlying platform or a network policy enforcing pod communication. Hubble is able to introspect the reason for ERROR or DROPPED flows and display the reason within the TYPE field of each flow.

hubble observe --output table --verdict DROPPED

Jun 26 08:18:59.517   default/tiefighter:37562   default/deathstar-54bb8475cc-dp646:80   http-request   DROPPED   HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port
Jun 26 08:19:12.451   default/tiefighter:35394   default/deathstar-54bb8475cc-dp646:80   http-request   DROPPED   HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port

Filter by Pod or Namespace

  • To show all flows originating from a specific pod, users can filter with the --from-pod flag
hubble observe --from-pod default/server

Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, FIN, PSH)
  • If users are only interested in traffic from a pod to a specific destination, we combine --from-pod and --to-pod
hubble observe --from-pod default/client --to-pod default/server

Jun 26 08:26:38.290: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 08:26:38.290: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: SYN)
Jun 26 08:26:38.291: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK)
Jun 26 08:26:38.291: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK)
  • If users want to see all traffic from a specific namespace, we specify the --from-namespace
hubble observe --from-namespace default

Jun 26 08:28:18.591: default/client:55870 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:28:19.029: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) policy-verdict:L3-L4 INGRESS ALLOWED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-proxy FORWARDED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jun 26 08:28:19.032: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:28:19.032: default/tiefighter:60802 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))

Filter Events with JQ

To filter events with the jq tool, switch the output to JSON mode. When we visualize the metadata through jq, we see more metadata around the workload labels, such as the pod name and namespace assigned to both source and destination. This information is accessible to Cilium because it is encoded in the packets based on pod identities.

hubble observe --output json | jq . | head -n 50

{
  "flow": {
    "time": "2023-06-26T08:30:06.145430409Z",
    "verdict": "FORWARDED",
    "ethernet": {
      "source": "76:2f:51:6e:8e:b4",
      "destination": "da:f3:2b:fc:25:fe"
    },
    "IP": {
      "source": "10.10.0.4",
      "destination": "192.168.0.241",
      "ipVersion": "IPv4"
    },
    "l4": {
      "TCP": {
        "source_port": 56198,
        "destination_port": 8080,
        "flags": {
          "SYN": true
        }
      }
    },
    "source": {
      "identity": 1,
      "labels": [
        "reserved:host"
      ]
    },
    "destination": {
      "ID": 60,
      "identity": 1008,
      "namespace": "kube-system",
      "labels": [
        "k8s:io.cilium.k8s.namespace.labels.addonmanager.kubernetes.io/mode=Reconcile",
        "k8s:io.cilium.k8s.namespace.labels.control-plane=true",
        "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/cluster-service=true",
        "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system",
        "k8s:io.cilium.k8s.policy.cluster=default",
        "k8s:io.cilium.k8s.policy.serviceaccount=coredns-autoscaler",
        "k8s:io.kubernetes.pod.namespace=kube-system",
        "k8s:k8s-app=coredns-autoscaler",
        "k8s:kubernetes.azure.com/managedby=aks"
      ],
      "pod_name": "coredns-autoscaler-69b7556b86-wrkqx",
      "workloads": [
        {
          "name": "coredns-autoscaler",
          "kind": "Deployment"
        }
      ]

6. Encryption

Cilium supports transparent encryption of Cilium-managed host traffic and traffic between Cilium-managed endpoints using either WireGuard® or IPsec. In this tutorial, we focus on WireGuard.

Note-

For WireGuard encryption, l7Proxy must be set to false; users should disable it by updating the ARM template or via the Azure CLI.

Wireguard

When WireGuard is enabled in Cilium, the agent running on each cluster node will establish a secure WireGuard tunnel between it and all other known nodes in the cluster.

Packets are not encrypted when they are destined to the same node from which they were sent. This behavior is intended. Encryption would provide no benefits in that case, given that the raw traffic can be observed on the node anyway.

Validate Wireguard Encryption

  • To demonstrate WireGuard encryption, users can create a client pod on one node and a server pod on another node in AKS (a minimal sketch of such a pair follows the pod listing below).
    • The client runs a “wget” towards the server every 2 seconds.
kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE                                NOMINATED NODE   READINESS GATES
client   1/1     Running   0          4s    192.168.1.30   aks-nodepool1-18458950-vmss000000   <none>           <none>
server   1/1     Running   0          16h   192.168.0.38   aks-nodepool1-18458950-vmss000001   <none>           <none>
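The exact manifests for these two pods are not shown in this tutorial; a minimal pair along these lines (names, images, node pinning, and the target address are illustrative assumptions, not the manifests used above) could look like:

apiVersion: v1
kind: Pod
metadata:
  name: server
spec:
  nodeSelector:
    kubernetes.io/hostname: aks-nodepool1-18458950-vmss000001
  containers:
  - name: server
    image: nginx            # serves HTTP on port 80
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  nodeSelector:
    kubernetes.io/hostname: aks-nodepool1-18458950-vmss000000
  containers:
  - name: client
    image: busybox
    # wget the server pod every 2 seconds, matching the traffic pattern described above
    command: ["sh", "-c", "while true; do wget -q -O /dev/null http://<server-pod-ip>; sleep 2; done"]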
  • Run a bash shell in one of the Cilium pods with kubectl -n kube-system exec -ti ds/cilium -- bash and execute the following commands:
    • Check that WireGuard has been enabled (the number of peers should equal the number of nodes minus one):
kubectl -n kube-system exec -it cilium-mgscb -- cilium status | grep Encryption
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
Encryption: Wireguard [cilium_wg0 (Pubkey: ###########################################, Port: 51871, Peers: 1)]

kubectl -n kube-system exec -it cilium-vr497 -- cilium status | grep Encryption
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
Encryption: Wireguard [cilium_wg0 (Pubkey: ###########################################, Port: 51871, Peers: 1)]
  • Install tcpdump on the node where the server pod has been created.
apt-get update
apt-get -y install tcpdump
  • Check that traffic (HTTP requests and responses) is sent via the cilium_wg0 tunnel device on the node where the server pod has been created:
tcpdump -n -i cilium_wg0

07:04:23.294242 IP 192.168.1.30.40170 > 192.168.0.38.80: Flags [P.], seq 1:70, ack 1, win 507, options [nop,nop,TS val 1189809356 ecr 3600600803], length 69: HTTP: GET / HTTP/1.1
07:04:23.294301 IP 192.168.0.38.80 > 192.168.1.30.40170: Flags [.], ack 70, win 502, options [nop,nop,TS val 3600600803 ecr 1189809356], length 0
07:04:23.294568 IP 192.168.0.38.80 > 192.168.1.30.40170: Flags [P.], seq 1:234, ack 70, win 502, options [nop,nop,TS val 3600600803 ecr 1189809356], length 233: HTTP: HTTP/1.1 200 OK
07:04:23.294747 IP 192.168.1.30.40170 > 192.168.0.38.80: Flags [.], ack 234, win 506, options [nop,nop,TS val 1189809356 ecr 3600600803], length 0

7. Kube-Proxy Replacement

One of the additional benefits of using Cilium is its extremely efficient data plane. It’s particularly useful at scale, as the standard kube-proxy is based on a technology – iptables – that was never designed for the churn and the scale of large Kubernetes clusters.

Validate Kube-Proxy Replacement

  • Users can first validate that the Cilium agent is running in the desired mode, with kube-proxy replacement set to Strict:
kubectl exec -it -n kube-system cilium-cfrng -- cilium status --verbose
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.25 (v1.25.6) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 10.10.0.4 (Direct Routing)]
  • Users can deploy nginx pods, create a new NodePort service and validate that Cilium installed the service correctly.
  • The following yaml is used for the backend pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 50
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
  • Verify that the NGINX pods are up and running: 
kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE                                NOMINATED NODE   READINESS GATES
my-nginx-77d5cb496b-69wtt   1/1     Running   0          46s   192.168.0.100   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-6nh8d   1/1     Running   0          46s   192.168.1.171   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-h9mxv   1/1     Running   0          46s   192.168.0.182   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-hnl6j   1/1     Running   0          46s   192.168.1.63    aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-mtnm9   1/1     Running   0          46s   192.168.0.170   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-pgvzj   1/1     Running   0          46s   192.168.0.237   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-rhx9q   1/1     Running   0          46s   192.168.1.247   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-w65kj   1/1     Running   0          46s   192.168.0.138   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-xr96h   1/1     Running   0          46s   192.168.1.152   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-zcwk5   1/1     Running   0          46s   192.168.1.75    aks-nodepool1-21972290-vmss000001   <none>           <none>
  • Users can create a NodePort service for the instances:
kubectl expose deployment my-nginx --type=NodePort --port=80
  • Users can verify that the NodePort service has been created:
kubectl get svc my-nginx
NAME       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
my-nginx   NodePort   10.0.87.130   <none>        80:31076/TCP   2s
  • With the help of the cilium service list command, we can validate that Cilium’s eBPF kube-proxy replacement created the new NodePort service under port 31076 (truncated output):
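The cilium service list command can be run from inside one of the Cilium agent pods, using the same kubectl exec approach as earlier:

kubectl -n kube-system exec ds/cilium -- cilium service list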
9    10.0.87.130:80     ClusterIP      1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)

10   10.10.0.5:31076    NodePort       1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)

11   0.0.0.0:31076      NodePort       1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)
  • At the same time users can verify, using iptables in the host namespace (on the node), that no iptables rule for the service is present: 
iptables-save | grep KUBE-SVC
  • Last but not least, a simple curl test shows connectivity for the exposed NodePort port 31076 as well as for the ClusterIP: 
curl 127.0.0.1:31076 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:00 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl 10.0.87.130 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:31 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl 10.10.0.5:31076 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:47 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

Conclusion

Hopefully, this post gave you a good overview of how to enable advanced features such as Layer 3, 4 & 7 policies and DNS-based policies, and how to observe network flows using the Hubble CLI, with Isovalent Enterprise for Cilium from the Azure Marketplace.

If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.

Further Reading

Cilium on AKS in Azure Marketplace: learn how to deploy Isovalent Enterprise for Cilium on an AKS cluster from the Azure Marketplace, both on a new cluster and by upgrading an existing cluster from Azure CNI powered by Cilium to Isovalent Enterprise for Cilium.
