
Cilium and Azure Arc: Solving the Multi-Cloud Cluster Manageability Conundrum

Amit Gupta

Cilium has become so popular as a CNI that users have enabled it not just on cloud providers but have also chosen it as the default CNI in their bare-metal and sandbox environments. On the data plane side, Cilium offers breakthrough features and enhancements. On the control plane side, however, users need a way to manage all of this infrastructure from a single pane of glass. Azure Arc combined with Isovalent Enterprise for Cilium provides exactly that: a combined view of on-premises and cloud-provider clusters. This tutorial teaches you to manage multiple Kubernetes clusters running Isovalent Enterprise for Cilium with Azure Arc.

What is Isovalent Enterprise for Cilium?

Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

What is Azure Arc?

Azure Arc-enabled Kubernetes allows you to attach Kubernetes clusters running anywhere so that you can manage and configure them in Azure. By managing your Kubernetes resources in a single control plane, you can enable a more consistent development and operation experience to run cloud-native apps anywhere and on any Kubernetes platform. Salient features of Azure Arc include:

  • Azure Arc-enabled Kubernetes provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters in different environments.
  • When the Azure Arc agents are deployed to the cluster, an outbound connection to Azure is initiated, using industry-standard SSL to secure data in transit.
  • Once clusters are connected to Azure, they’re represented as their own resources in Azure Resource Manager, and they can be organized using resource groups and tagging, as the example below shows.
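
For example, once connected, Arc-enabled clusters can be listed like any other Azure resource. A quick check from the CLI, assuming the clusters live in your currently active subscription:

az resource list --resource-type Microsoft.Kubernetes/connectedClusters -o table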

Why Isovalent Enterprise for Cilium and Azure Arc?

You get the best out of both offerings with Isovalent Enterprise for Cilium and Azure Arc.

  • Advanced network policy: Isovalent Cilium Enterprise provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security (a sample DNS-aware policy follows this list).
  • Hubble flow observability + User Interface: Isovalent Cilium Enterprise’s Hubble observability feature provides real-time visibility into network traffic flows, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: Isovalent Cilium Enterprise provides seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making it easy to deploy and manage complex microservices architectures.
  • Enterprise-grade support: Isovalent Cilium Enterprise includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.
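
To make the DNS-aware policy capability concrete, here is a minimal sketch of a CiliumNetworkPolicy that lets pods labeled app=demo resolve DNS through kube-dns and then reach only api.github.com; the label and FQDN are illustrative assumptions, not values from this tutorial:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-fqdn-demo
spec:
  endpointSelector:
    matchLabels:
      app: demo
  egress:
    # Allow DNS lookups via kube-dns so Cilium can observe the queries.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Only egress traffic to this FQDN is allowed.
    - toFQDNs:
        - matchName: "api.github.com"
EOF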

How can you deploy Isovalent Enterprise for Cilium?

In this tutorial, you will install Isovalent Enterprise for Cilium on Kubernetes clusters using Helm. To obtain the Helm values for the installation and access to the Enterprise documentation, reach out to sales@isovalent.com or support@isovalent.com.
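
While the chart location and the values file come from Isovalent, the installation itself follows the standard Helm workflow. A minimal sketch, where the repository URL, chart name, and enterprise-values.yaml file are placeholders for what Isovalent provides:

helm repo add isovalent <repository-URL-provided-by-Isovalent>
helm repo update
helm install cilium isovalent/cilium --namespace kube-system -f enterprise-values.yaml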

Pre-Requisites

The following prerequisites must be in place before you proceed with this tutorial.

  • Azure CLI version 2.48.1 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
  • You should have an Azure Subscription.
  • To enable Azure Arc on your Kubernetes clusters, you can install the following dependencies on your development machine or create a VM in the respective environment.
  • An up-and-running Kubernetes cluster. If you don’t have one, you can create a cluster using one of the distributions covered in the “Create Kubernetes clusters” section below.
  • At least 850 MB of memory free for the Arc agents that will be deployed on the cluster, and the capacity to use approximately 7% of a single CPU.
  • The latest version of the connectedk8s Azure CLI extension, installed by running the following command:
az extension add --name connectedk8s
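
If the extension is already installed, update it to the latest version instead:

az extension update --name connectedk8s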

Azure Arc Requirements

The sections below walk through the specific requirements your Kubernetes cluster must meet to work with Azure Arc.

Create Kubernetes clusters

You can install Azure Arc on any Kubernetes distribution, whichever applies to your use case. These clusters can be created from your local machine or from a VM in the respective resource group/VPC/VNet of the respective cloud provider. In this tutorial, we will be installing Azure Arc on the following distributions:

  • AKS
  • EKS
  • EKS-Anywhere
  • GKE
  • Kind
  • k3s
  • Follow the prerequisites to create an AKS cluster. Brief steps are outlined below.
clusterName="nwpluginazurecnioverlay"
resourceGroup="nwpluginazurecnioverlay"
location="westcentralus"
az group create --name $resourceGroup --location $location
az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
az aks get-credentials --resource-group $resourceGroup --name $clusterName
  • Check that all the pods are up and running.
kubectl get pods -A -o wide

NAMESPACE     NAME                                          READY   STATUS    RESTARTS       AGE    IP             NODE                                  NOMINATED NODE   READINESS GATES
azure-arc     cluster-metadata-operator-55d7754dcb-r5n7g    2/2     Running   0              121m   10.0.0.175     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     clusterconnect-agent-55cff6cb6c-fm2r5         3/3     Running   1 (121m ago)   121m   10.0.0.204     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     clusteridentityoperator-5b57955cf-mvdbr       2/2     Running   0              121m   10.0.1.211     aks-azurecilium-29703265-vmss000001   <none>           <none>
azure-arc     config-agent-55f49f9c6b-z76d6                 2/2     Running   0              121m   10.0.0.33      aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     controller-manager-84bc58855d-9g2k6           2/2     Running   0              121m   10.0.0.26      aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     extension-events-collector-68fb565d69-w9dgv   2/2     Running   0              121m   10.0.0.120     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     extension-manager-5f5f94cc75-q6kfs            3/3     Running   0              121m   10.0.1.136     aks-azurecilium-29703265-vmss000001   <none>           <none>
azure-arc     flux-logs-agent-865c554867-vjtlj              1/1     Running   0              121m   10.0.0.137     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     kube-aad-proxy-5bd88bfc5b-fsdgd               2/2     Running   0              121m   10.0.1.121     aks-azurecilium-29703265-vmss000001   <none>           <none>
azure-arc     logcollector-769866f87b-dqwn6                 1/1     Running   0              121m   10.0.0.220     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     metrics-agent-7597589fcd-clmlz                2/2     Running   0              121m   10.0.0.246     aks-azurecilium-29703265-vmss000000   <none>           <none>
azure-arc     resource-sync-agent-8cff88d7b-qjg5m           2/2     Running   0              121m   10.0.0.38      aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   azure-cns-9f6cj                               1/1     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   azure-cns-hptpx                               1/1     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   azure-ip-masq-agent-kffhl                     1/1     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   azure-ip-masq-agent-wpfnq                     1/1     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   cilium-dszfg                                  1/1     Running   0              145m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   cilium-operator-6b8448d64d-f42zr              1/1     Running   0              145m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   cilium-operator-6b8448d64d-zndqm              1/1     Running   0              145m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   cilium-qzjn4                                  1/1     Running   0              145m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   cloud-node-manager-8cl2f                      1/1     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   cloud-node-manager-m8dqt                      1/1     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   coredns-789789675-mdzmd                       1/1     Running   0              144m   10.0.1.89      aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   coredns-789789675-xngd2                       1/1     Running   0              144m   10.0.0.76      aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   coredns-autoscaler-649b947bbd-97v8g           1/1     Running   0              143m   10.0.1.248     aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   csi-azuredisk-node-dphtk                      3/3     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   csi-azuredisk-node-hl77j                      3/3     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   csi-azurefile-node-krj2x                      3/3     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   csi-azurefile-node-nwzrn                      3/3     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   konnectivity-agent-5c44d98d75-dqkjk           1/1     Running   0              113m   10.0.1.170     aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   konnectivity-agent-5c44d98d75-qp6bg           1/1     Running   0              113m   10.0.0.253     aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   kube-proxy-s2dgh                              1/1     Running   0              149m   192.168.10.5   aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   kube-proxy-zzb6m                              1/1     Running   0              149m   192.168.10.4   aks-azurecilium-29703265-vmss000001   <none>           <none>
kube-system   metrics-server-5955767688-29kbq               2/2     Running   0              143m   10.0.0.142     aks-azurecilium-29703265-vmss000000   <none>           <none>
kube-system   metrics-server-5955767688-g9htc               2/2     Running   0              143m   10.0.1.99      aks-azurecilium-29703265-vmss000001   <none>           <none>

Install Isovalent Enterprise for Cilium

To obtain the Helm values to install Isovalent Enterprise for Cilium and access to the Enterprise documentation, reach out to sales@isovalent.com or support@isovalent.com.

Providers for Azure Arc-enabled Kubernetes

Note: The steps below are valid for all the distributions listed above.

Set the Subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName
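
To confirm that the intended subscription is now active, check the current account context:

az account show -o table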

Register providers for Azure Arc-enabled Kubernetes

  • Enter the following commands:
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
  • Monitor the registration process. Registration may take up to 10 minutes.
az provider show -n Microsoft.Kubernetes -o table
Namespace             RegistrationPolicy    RegistrationState
--------------------  --------------------  -------------------
Microsoft.Kubernetes  RegistrationRequired  Registered


az provider show -n Microsoft.KubernetesConfiguration -o table

Namespace                          RegistrationPolicy    RegistrationState
---------------------------------  --------------------  -------------------
Microsoft.KubernetesConfiguration  RegistrationRequired  Registered


az provider show -n Microsoft.ExtendedLocation -o table

Namespace                   RegistrationPolicy    RegistrationState
--------------------------  --------------------  -------------------
Microsoft.ExtendedLocation  RegistrationRequired  Registered

Create a Service Principal

For Azure Arc, you need to create an identity (user or service principal) that can be used to log in to Azure CLI and connect your cluster to Azure Arc. This step is optional if such an identity already exists.

az ad sp create-for-rbac -n kindazurearc --role contributor --scopes /subscriptions/##############################
Creating 'contributor' role assignment under scope '/subscriptions/##############################'
The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli
{
  "appId": "##################################",
  "displayName": "kindazurearc",
  "password": "##############################",
  "tenant": "#################################"
}
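
You can then log in to Azure CLI as this service principal, using the appId, password, and tenant values from the output above:

az login --service-principal -u <appId> -p <password> --tenant <tenant>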

Create a resource group

az group create --name AzureArc --location eastus

Connect an existing Kubernetes cluster

Option 1- Deploying the Azure Arc agents to the cluster using az connectedk8s extension

The command below deploys the Azure Arc agents to the cluster and installs Helm v3.6.3 in the .azure folder of the deployment machine. This Helm 3 installation is only used for Azure Arc, and it doesn’t remove or change any previously installed versions of Helm on the machine.

az connectedk8s connect --name kindazurearc --resource-group AzureArc --location eastus

This operation might take a while...
Downloading kubectl client for first time. This can take few minutes...
Downloading helm client for first time. This can take few minutes...
The required pre-checks for onboarding have succeeded.
Azure resource provisioning has begun.
Azure resource provisioning has finished.
Starting to install Azure arc agents on the Kubernetes cluster.
{
  "agentPublicKeyCertificate": "#################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################################",
  "agentVersion": null,
  "connectivityStatus": "Connected",
  "distribution": "kind",
  "id": "/subscriptions/#############
##########/resourceGroups/AzureArc/providers/Microsoft.Kubernetes/connectedClusters/kindazurearc",
  "identity": {
    "principalId": "#######################",
    "tenantId": "#######################",
    "type": "SystemAssigned"
  },
  "infrastructure": "generic",
  "kubernetesVersion": null,
  "lastConnectivityTime": null,
  "location": "eastus",
  "managedIdentityCertificateExpirationTime": null,
  "name": "kindazurearc",
  "offering": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "AzureArc",
  "systemData": {
    "createdAt": "2023-12-20T12:09:07.781678+00:00",
    "createdBy": "amit.gupta@isovalent.com",
    "createdByType": "User",
    "lastModifiedAt": "2023-12-20T12:09:07.781678+00:00",
    "lastModifiedBy": "amit.gupta@isovalent.com",
    "lastModifiedByType": "User"
  },
  "tags": {},
  "totalCoreCount": null,
  "totalNodeCount": null,
  "type": "microsoft.kubernetes/connectedclusters"
}

Option 2- Deploying the Azure Arc agents to the cluster using the Azure portal

  • Login to the Azure portal
  • Click > Home > Azure Arc > Add a Kubernetes Cluster with Azure Arc
  • Click > Next
  • Select the subscription and resource group where the Kubernetes cluster was created in the previous step.
  • Select the already created resource group (see above) and select “Public endpoint” as the connectivity method.
  • Click > Next
  • You can provide tags (optional)
  • Click > Next
  • You can now copy or download this script onto the Kubernetes cluster and run it.
  • Sample output when the script is run from the Kubernetes cluster:
az connectedk8s connect --name "kindazurearc" --resource-group "kubeadm" --location "eastus" --correlation-id "#################################" --tags "."

This operation might take a while...
The required pre-checks for onboarding have succeeded.
Azure resource provisioning has begun.
Azure resource provisioning has finished.
Starting to install Azure arc agents on the Kubernetes cluster.
{
  "agentPublicKeyCertificate": "############################################################################################################################################################################################################################################################################################################################################################################################################################################################################",
  "agentVersion": null,
  "connectivityStatus": "Connected",
  "distribution": "kind",
  "id": "/subscriptions/############ ############/resourceGroups/kubeadm/providers/Microsoft.Kubernetes/connectedClusters/kindazurearc",
  "identity": {
    "principalId": "###################################",
    "tenantId": "###################################",
    "type": "SystemAssigned"
  },
  "infrastructure": "generic",
  "kubernetesVersion": null,
  "lastConnectivityTime": null,
  "location": "eastus",
  "managedIdentityCertificateExpirationTime": null,
  "name": "kindazurearc",
  "offering": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "kubeadm",
  "systemData": {
    "createdAt": "2023-12-20T15:29:46.743307+00:00",
    "createdBy": "amit.gupta@isovalent.com",
    "createdByType": "User",
    "lastModifiedAt": "2023-12-20T15:29:46.743307+00:00",
    "lastModifiedBy": "amit.gupta@isovalent.com",
    "lastModifiedByType": "User"
  },
  "tags": {
    ".": ""
  },
  "totalCoreCount": null,
  "totalNodeCount": null,
  "type": "microsoft.kubernetes/connectedclusters"
}
  • Once the cluster is connected to Azure, click Close

Verify cluster connection

You can verify the cluster connection by running this command:

az connectedk8s list --resource-group <resourceGroup> -o table

Name        Location    ResourceGroup
----------  ----------  ---------------
azureaksvm  eastus      azureaksvm
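
For more detail on a single cluster, such as its connectivity status and agent version, you can query it directly:

az connectedk8s show --name <clusterName> --resource-group <resourceGroup> -o table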

View Azure Arc agents for Kubernetes

Azure Arc-enabled Kubernetes deploys several agents into the azure-arc namespace.

  • View the deployments and pods using:
kubectl get deployments,pods -n azure-arc

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-metadata-operator    1/1     1            1           88s
deployment.apps/clusterconnect-agent         1/1     1            1           88s
deployment.apps/clusteridentityoperator      1/1     1            1           88s
deployment.apps/config-agent                 1/1     1            1           88s
deployment.apps/controller-manager           1/1     1            1           88s
deployment.apps/extension-events-collector   1/1     1            1           88s
deployment.apps/extension-manager            1/1     1            1           88s
deployment.apps/flux-logs-agent              1/1     1            1           88s
deployment.apps/kube-aad-proxy               1/1     1            1           88s
deployment.apps/logcollector                 1/1     1            1           88s
deployment.apps/metrics-agent                1/1     1            1           88s
deployment.apps/resource-sync-agent          1/1     1            1           88s

NAME                                              READY   STATUS    RESTARTS   AGE
pod/cluster-metadata-operator-55d7754dcb-5f9cm    2/2     Running   0          87s
pod/clusterconnect-agent-55cff6cb6c-xscd9         3/3     Running   0          87s
pod/clusteridentityoperator-69f57bbb8f-zmpms      2/2     Running   0          87s
pod/config-agent-85569c4fc6-hkp48                 2/2     Running   0          87s
pod/controller-manager-84bc58855d-bth4t           2/2     Running   0          87s
pod/extension-events-collector-5b647476ff-66fnn   2/2     Running   0          88s
pod/extension-manager-78649745d5-86hj2            3/3     Running   0          88s
pod/flux-logs-agent-865c554867-nr5wk              1/1     Running   0          88s
pod/kube-aad-proxy-5bd88bfc5b-4z5mz               2/2     Running   0          87s
pod/logcollector-769866f87b-npmr6                 1/1     Running   0          87s
pod/metrics-agent-7597589fcd-vqsk9                2/2     Running   0          87s
pod/resource-sync-agent-8cff88d7b-gffv9           2/2     Running   0          88s

Cluster management via Azure Arc

Now that your clusters are connected to Azure, you can view them in the Azure portal. Multiple clusters can now be managed from the Azure portal via Azure Arc.

  • Click > Home > Azure Arc > Kubernetes Clusters

Securely connect to an on-premises Kubernetes Cluster with Azure Arc

You can give users access using RBAC (Role-based access control) and let them connect to the Kubernetes cluster through Azure Arc.

Create a User on the Kubernetes Cluster

  • To authorize a user to access the Kubernetes cluster with the kubeconfig file pointing to the apiserver of your Kubernetes cluster, run the following commands to create a service account and bind it to the cluster-admin role. This example creates the service account in the default namespace, but you can substitute any other namespace for default.
kubectl create serviceaccount demo-user -n default
serviceaccount/demo-user created
kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
  • Create a service account token:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: demo-user-secret
  annotations:
    kubernetes.io/service-account.name: demo-user
type: kubernetes.io/service-account-token
EOF
TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\n/g')
  • Get the token to output to the console
echo $TOKEN
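
Optionally, verify that the service account carries the expected permissions before heading to the portal; this check assumes the demo-user account created above:

kubectl auth can-i '*' '*' --as=system:serviceaccount:default:demo-user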

Access the Kubernetes Cluster in the Azure Portal with Azure Arc

When you open the Azure Arc resource in the Azure portal and go to any Kubernetes resources pane, you will see a message asking you to sign in to view the Kubernetes resources.

Paste the previously created token into the text box and click Sign in. Now, you should see the resources of the Kubernetes cluster.

Access the Kubernetes cluster from your local machine with Azure Arc

Using the Azure portal to access the Kubernetes cluster is convenient, but many users prefer kubectl. You can access the Kubernetes cluster from your local machine using the following Azure CLI command:

az connectedk8s proxy -n <clusterName> -g <resourceGroup> --token <TOKEN>

Replace <TOKEN> with the previously created token. You can use this command on any computer if the Azure CLI is installed. The command downloads the Kubernetes config file, sets the context, and creates a proxy connection to the Kubernetes cluster through Azure Arc.

After the connection is established, open a new terminal window and use kubectl as usual.
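
Putting it together, a typical session looks like this; the cluster and resource group names from the earlier example are used as illustrative placeholders:

# Terminal 1: open the proxy connection through Azure Arc.
az connectedk8s proxy -n kindazurearc -g AzureArc --token $TOKEN

# Terminal 2: once the proxy is up, use kubectl as usual.
kubectl get nodes
kubectl get pods -n azure-arc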

Monitor a Kubernetes Cluster with Azure Monitor and Azure Arc

Azure Arc allows you to project your on-premises Kubernetes cluster into Azure. Doing so enables you to manage the cluster from Azure with tools such as Azure Monitor.

Enable Azure Monitor

  • Click Azure Arc > Kubernetes Clusters > (name of cluster) > Insights > Configure Monitoring
  • Click Configure Monitoring and choose a Log Analytics workspace. This will create a new Log Analytics workspace for the metrics and logs of the extensions. You can also use an existing Log Analytics workspace. A CLI alternative is sketched after this list.
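
If you prefer the CLI over the portal, Container Insights can also be enabled as an Azure Arc cluster extension. A minimal sketch, assuming the k8s-extension Azure CLI extension is installed; replace the placeholders with your cluster and resource group names:

az k8s-extension create --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroup> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers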

Note: It usually takes 5-10 minutes before insights show up.

Create Dashboards in the Azure Portal

After you install the extension, it collects metric information and sends it to Azure. This allows you to use Azure Monitor the same way you would use Azure VMs.

Click > Azure Arc > Kubernetes clusters > Open Azure Arc > (name of cluster) > Insights

For even more insight into your cluster or pods, open the Metrics pane in Azure Arc. There, you can create charts and display useful information. The following screenshot shows a chart that displays the pod count and the CPU percentage used for all nodes.

Click > Azure Arc > Kubernetes clusters > Open Azure Arc > (name of cluster) > Metrics

Validation: Isovalent Enterprise for Cilium

Run cilium connectivity test (an automated test that checks that Cilium has been deployed correctly and tests intra-node connectivity, inter-node connectivity, and network policies) to verify that everything is working as expected.
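
Assuming the Cilium CLI is installed on a machine with access to the cluster, the test can be run directly:

cilium connectivity test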

Output truncated:

[=] Kind [client-egress-l7-named-port]
........

[=] Skipping Kind [client-egress-l7-tls-deny-without-headers]

[=] Skipping Kind [client-egress-l7-tls-headers]

[=] Skipping Kind [client-egress-l7-set-header]

[=] Skipping Kind [echo-ingress-auth-always-fail]

[=] Skipping Kind [echo-ingress-mutual-auth-spiffe]

[=] Skipping Kind [pod-to-ingress-service]

[=] Skipping Kind [pod-to-ingress-service-deny-all]

[=] Skipping Kind [pod-to-ingress-service-allow-ingress-identity]
[=] Kind [dns-only]
........
[=] Kind [to-fqdns]
........

✅ All 42 tests (182 actions) successful, 13 tests skipped, 1 scenarios skipped.

Conclusion

Hopefully, this post gave you a good overview of integrating an existing Kubernetes cluster running Isovalent Enterprise for Cilium with Azure Arc. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
