
Isovalent Enterprise for Cilium on EKS & EKS-A in AWS Marketplace

Amit Gupta

We are pleased to announce that Isovalent Enterprise for Cilium is now available in the AWS Marketplace. This blog will guide you through deploying Isovalent Enterprise for Cilium on EKS and EKS-A clusters from the AWS Marketplace. This new availability in AWS Marketplace allows customers to:

  • Consume Kubernetes networking, security, and observability as services.
  • Easily find, test, and deploy Cilium.
  • Get started in minutes instead of lengthy deployment cycles.
  • Only pay for services consumed, with no upfront investment commitments.

Cilium is the default CNI for EKS-Anywhere and has been widely adopted by EKS users and customers.

What is Isovalent Enterprise for Cilium?

Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble enables thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium?

For enterprise customers requiring support and/or usage of advanced networking, security, and observability features, “Isovalent Enterprise for Cilium” is recommended. This offering brings complete flexibility in accessing Cilium features while retaining seamless ease of use and integration with AWS.

What are the benefits of Cilium in AWS?

When running in the context of AWS, Cilium can natively integrate with the cloud provider’s SDN (Software Defined Networking). Cilium can speak BGP, route traffic on the network, and represent existing network endpoints with cloud-native identities in an on-premises environment. To the application team using Kubernetes daily, the user experience will be the same regardless of whether the workload runs in Kubernetes clusters backed by public or private cloud infrastructure. Entire application stacks or even entire clusters become portable across clouds.

Cilium has several differentiators that set it apart from other networking and security solutions in the cloud native ecosystem, including:

  • eBPF-based technology: Cilium leverages eBPF technology to provide deep visibility into network traffic and granular control over network connections.
  • Micro-segmentation: Cilium enables micro-segmentation at the network level, allowing organizations to enforce policies that limit communication between different services or workloads.
  • Encryption and authentication: Cilium provides encryption and authentication of all network traffic, ensuring that only authorized parties can access data and resources.
  • Application-aware network security: Cilium provides network firewalling on L3-L7, supporting HTTP, gRPC, Kafka, and other protocols. This enables application-aware network security and protects against attacks that target specific applications or services (see the example policy after this list).
  • Observability: Cilium provides rich observability of Kubernetes and cloud-native infrastructure, allowing security teams to gain security-relevant observability and feed network activity into an SIEM (Security Information and Event Management) solution such as Splunk or Elastic.
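
To make the application-aware point concrete, here is a minimal CiliumNetworkPolicy sketch that allows only HTTP GET requests to /healthz on port 80 from a client workload; the app: client and app: server labels are hypothetical placeholders for illustration:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-healthz-get
spec:
  endpointSelector:
    matchLabels:
      app: server          # hypothetical label for the protected workload
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client        # hypothetical label for the allowed client
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/healthz"

With such a policy in place, any other request, such as a POST to the same endpoint, is dropped at L7 by the Cilium agent.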

Why AWS marketplace?

AWS Marketplace is an online store that contains thousands of IT software applications and services built by industry-leading technology companies. In AWS Marketplace, you can find, try, buy, and deploy the software and services needed to build new solutions and manage your cloud infrastructure. The catalog includes solutions for different industries and technical areas, free trials, and consulting services from AWS partners. Included among these solutions are Kubernetes application-based container offers. These offers contain applications that are meant to run on Kubernetes clusters such as Elastic Kubernetes Service (EKS).

Prerequisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • Access to the AWS Marketplace. You can create a new account for free.
  • The Cilium operator requires EC2 privileges to perform ENI creation and IP allocation.
  • Install kubectl
  • Install Helm
  • Install eksctl
  • Install awscli
  • Cilium CLI: Isovalent Enterprise for Cilium provides a Cilium CLI tool that automatically collects all the logs and debug information needed to troubleshoot your installation. You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).
  • Hubble CLI: To access the observability data collected by Hubble, you can install the Hubble CLI for Linux, macOS, or other distributions on your local machine(s) or server(s). An install sketch for both CLIs follows this list.
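
For reference, here is one way to install both CLIs, following the upstream Cilium and Hubble release conventions. This sketch assumes a Linux amd64 machine; adjust the OS and architecture for your platform:

# Install the latest stable Cilium CLI (linux/amd64 assumed)
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin

# Install the latest stable Hubble CLI (linux/amd64 assumed)
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/${HUBBLE_VERSION}/hubble-linux-amd64.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-amd64.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-amd64.tar.gz /usr/local/bin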

Where can I deploy Isovalent Enterprise for Cilium?

Isovalent Enterprise for Cilium from the AWS Marketplace can be deployed on:

  • An existing EKS cluster
  • A new EKS cluster using QuickLaunch
  • A new EKS-A cluster

1. Installing Isovalent Enterprise for Cilium on an EKS cluster

You can install Isovalent Enterprise for Cilium on an existing EKS cluster or create a new EKS cluster for this tutorial. 

Log in to the AWS Marketplace.
Type “Isovalent” in the search window and select the application.
  • Click> Isovalent Enterprise for Cilium
  • Click> Continue to Subscribe
  • Click> Continue to Configuration
  • Click> Fulfillment Option and select “Helm Chart”
  • Click> Choose a fulfillment option and select “Isovalent Enterprise for Cilium on EKS”
  • Click> Software version> Select v1.12.8-awsmp.* (pick the latest version)
  • The Launch method should be selected as “Launch on existing cluster” by default.
  • You must ensure the IAM OIDC provider is associated with the cluster.
  • To use AWS Identity and Access Management (IAM) roles for service accounts, an IAM OIDC provider must exist for your cluster’s OIDC issuer URL.
eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=cluster2 --approve
2023-10-11 10:10:46 [ℹ]  will create IAM Open ID Connect provider for cluster "cluster2" in "ap-northeast-1"
2023-10-11 10:10:47 [ℹ]  created IAM Open ID Connect provider for cluster "cluster2" in "ap-northeast-1"
  • Create an AWS IAM role and Kubernetes service account.
kubectl create namespace cilium-system

eksctl create iamserviceaccount \
    --name cilium-licensing \
    --namespace cilium-system \
    --cluster <ENTER_YOUR_CLUSTER_NAME_HERE> \
    --region <REGION_NAME_FOR_THE_CLUSTER> \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy \
    --approve \
    --override-existing-serviceaccounts

Output Truncated:

2023-10-27 17:33:12 [ℹ]  1 iamserviceaccount (cilium-system/cilium-licensing) was included (based on the include/exclude rules)
2023-10-27 17:33:12 [!]  metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
2023-10-27 17:33:12 [ℹ]  1 task: {
    2 sequential sub-tasks: {
        create IAM role for serviceaccount "cilium-system/cilium-licensing",
        create serviceaccount "cilium-system/cilium-licensing",
    } }
2023-10-27 17:33:12 [ℹ]  building iamserviceaccount stack "eksctl-cluster1-addon-iamserviceaccount-cilium-system-cilium-licensing"
2023-10-27 17:33:12 [ℹ]  deploying stack "eksctl-cluster1-addon-iamserviceaccount-cilium-system-cilium-licensing"
2023-10-27 17:33:12 [ℹ]  waiting for CloudFormation stack "eksctl-cluster1-addon-iamserviceaccount-cilium-system-cilium-licensing"
2023-10-27 17:33:43 [ℹ]  waiting for CloudFormation stack "eksctl-cluster1-addon-iamserviceaccount-cilium-system-cilium-licensing"
2023-10-27 17:33:44 [ℹ]  created serviceaccount "cilium-system/cilium-licensing"
  • Launch Isovalent Enterprise for Cilium by installing a Helm chart on your Amazon EKS cluster.
    • The Helm CLI version in your launch environment must be 3.7.1.
    • Note- username, password, and path for pulling the image have been hidden here but are available when the user is logged in.
export HELM_EXPERIMENTAL_OCI=1

aws ecr get-login-password \
    --region us-east-1 | helm registry login \
    --username ############# \
    --password-stdin #################.dkr.ecr.#########.amazonaws.com

mkdir awsmp-chart && cd awsmp-chart

helm pull oci://#################.dkr.ecr.#########.amazonaws.com/isovalent/cilium-enterprise-eks --version v1.12.8-awsmp.6

tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete

helm install cilium-enterprise \
    --namespace cilium-system ./* \
    --set clm.serviceAccounts.name=cilium-licensing \
    --set clm.serviceAccounts.create=false 

Output Truncated:

helm install cilium-enterprise \
    --namespace cilium-system ./* \
    --set clm.serviceAccounts.name=cilium-licensing \
    --set clm.serviceAccounts.create=false
NAME: cilium-enterprise
LAST DEPLOYED: Fri Oct 27 17:37:45 2023
NAMESPACE: cilium-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
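
Before moving on, it is worth confirming that the chart’s workloads are starting up; a quick sanity check (the exact pod names will vary):

kubectl get pods -n cilium-system
kubectl get daemonset,deployment -n cilium-system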

2. Optional: Installing Isovalent Enterprise for Cilium on an EKS cluster using QuickLaunch

QuickLaunch helps you easily launch and explore container-based applications. QuickLaunch uses AWS CloudFormation to create an Amazon EKS cluster and Helm charts to launch the application.

Note- Isovalent recommends using QuickLaunch only for early-release testing. For production environments, you should follow option 1.

  • Click> Isovalent Enterprise for Cilium
  • Click> Continue to Subscribe
  • Click> Continue to Configuration
  • Click> Fulfillment Option and select “Helm Chart”
  • Click> Choose a fulfillment option and select “Isovalent Enterprise for Cilium on EKS”
  • Click> Software version> Select v1.12.8-awsmp.* (pick the latest version)
  • Click> Continue to Launch
  • Click> “Launch on a new EKS cluster with QuickLaunch”
  • Click> QuickLaunch with CloudFormation
  • This will redirect you to a page where you fill in the details for a CloudFormation stack that will be used to create an EKS cluster running Isovalent Enterprise for Cilium.
  • Enter a Stack Name
  • Enter a name for your EKS cluster.

Note- The EKS cluster name should be less than 16 characters. This is a mandatory requirement.

  • Leave the Helm chart parameters set to the defaults pre-populated in the CloudFormation template.
  • Select> I acknowledge that AWS CloudFormation might create IAM resources with customized names, and Select> I acknowledge that AWS CloudFormation might require the following capability: CAPABILITY_AUTO_EXPAND
  • Click> Create Stack
  • This will redirect you to the stacks page, where you can monitor the creation of the new CloudFormation stack.

Accessing the Cluster

To access your EKS cluster created by QuickLaunch, you will need to update your kubectl config:

aws eks update-kubeconfig --region <region-code> --name <cluster-name>
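
With the kubeconfig updated, a quick check confirms that kubectl is pointed at the new cluster (the output values will be specific to your environment):

kubectl config current-context
kubectl get nodes -o wide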

3. Installing Isovalent Enterprise for Cilium in an EKS-A cluster

EKS Anywhere creates a Kubernetes cluster on-premises for a chosen provider. Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere. To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac administrative machine.

Note-

  • Refer to the Prerequisites section to ensure all dependencies are installed and the administrative machine is configured.
  • For EKS Anywhere:
  • EKS-A cluster infrastructure preparation (hardware and inventory management) is outside the scope of this document; it assumes you have already handled it before creating a cluster for any provider type.
  • The provider type chosen for this tutorial is Docker, a development-only option that is not for production. You can choose from a list of providers and modify the commands accordingly.
  • To install an EKS-A cluster on Docker, follow the steps outlined.
  • All EKS-A clusters are deployed with the base edition of Cilium, which must be uninstalled before upgrading to Isovalent Enterprise for Cilium. An upcoming release will support an automatic upgrade from the default Cilium image to Isovalent Enterprise for Cilium. This can be achieved in two ways (you can use either):
  • spec.clusterNetwork.cniConfig.cilium.skipUpgrade set to true, either at cluster creation time (as in the spec below) or by upgrading your existing EKS-A cluster (see the sketch after the spec).
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: cilium
spec:
  clusterNetwork:
    cniConfig:
      cilium:
        skipUpgrade: true
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: cilium
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.29"
  managementCluster:
    name: cilium
  workerNodeGroupConfigurations:
  - count: 2
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: cilium
spec: {}

---
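
If you choose the upgrade path rather than setting the field at creation time, a minimal sketch, assuming the spec above (with skipUpgrade: true) is saved as cluster.yaml; workload clusters may additionally need a --kubeconfig flag:

eksctl anywhere upgrade cluster -f cluster.yaml
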
  • Deleting the service account, cluster roles, cluster role bindings, and related resources of the default Cilium installation:
kubectl delete serviceaccount cilium --namespace kube-system
kubectl delete serviceaccount cilium-operator --namespace kube-system
kubectl delete secret hubble-ca-secret --namespace kube-system
kubectl delete secret hubble-server-certs --namespace kube-system
kubectl delete configmap cilium-config --namespace kube-system
kubectl delete clusterrole cilium
kubectl delete clusterrolebinding cilium
kubectl delete clusterrolebinding cilium-operator
kubectl delete secret cilium-ca --namespace kube-system
kubectl delete service hubble-peer --namespace kube-system
kubectl delete service cilium-agent --namespace kube-system
kubectl delete daemonset cilium --namespace kube-system
kubectl delete deployment cilium-operator --namespace kube-system
kubectl delete clusterrole cilium-operator
kubectl delete role cilium-config-agent -n kube-system
kubectl delete rolebinding cilium-config-agent -n kube-system
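
After the cleanup, you can verify that the default Cilium workloads are gone before installing the enterprise chart; a quick sanity check:

kubectl get daemonset,deployment -n kube-system | grep cilium || echo "default Cilium workloads removed"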

Steps:

  • Click> Isovalent Enterprise for Cilium
  • Click> Continue to Subscribe
  • Click> Continue to Configuration
  • Click> Fulfillment Option and select “Helm Chart”
  • Click> Choose a fulfillment option and select “Isovalent Enterprise for Cilium on EKS Anywhere”
  • Click> Software version> Select v1.12.8-awsmp.* (pick the latest version)
  • Click> Continue to Launch
  • The launch target is set to “Self-Managed Kubernetes”
  • Create a license token and IAM role. Choose Create token to generate a license token and an AWS IAM role. These will be used to access the AWS License Manager APIs for billing and metering. You can use an existing token if you have one; make sure that the following permissions are granted to the token:
"license-manager:CheckoutLicense",
"license-manager:CheckInLicense",
"license-manager:ExtendLicenseConsumption",
"license-manager:GetLicense",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:GetAuthorizationToken"

  • To create the IAM role, you will need to choose Grant Permission.
  • Save the token and IAM role as a Kubernetes secret.
  • Note- username, password, and path for pulling the image have been hidden here but are available when the user is logged in.
kubectl create namespace cilium-system

kubectl create serviceaccount cilium-licensing --namespace cilium-system

AWSMP_TOKEN=<CREATE_TOKEN_ABOVE>

AWSMP_ROLE_ARN=<CREATE_ROLE_ABOVE>

kubectl create secret generic awsmp-license-token-secret \
--from-literal=license_token=$AWSMP_TOKEN \
--from-literal=iam_role=$AWSMP_ROLE_ARN \
--namespace cilium-system

AWSMP_ACCESS_TOKEN=$(aws license-manager get-access-token \
    --output text --query '*' --token $AWSMP_TOKEN --region us-east-1)

AWSMP_ROLE_CREDENTIALS=$(aws sts assume-role-with-web-identity \
                --region 'us-east-1' \
                --role-arn $AWSMP_ROLE_ARN \
                --role-session-name 'AWSMP-guided-deployment-session' \
                --web-identity-token $AWSMP_ACCESS_TOKEN \
                --query 'Credentials' \
                --output text)   
                
export AWS_ACCESS_KEY_ID=$(echo $AWSMP_ROLE_CREDENTIALS | awk '{print $1}' | xargs)

export AWS_SECRET_ACCESS_KEY=$(echo $AWSMP_ROLE_CREDENTIALS | awk '{print $3}' | xargs)

export AWS_SESSION_TOKEN=$(echo $AWSMP_ROLE_CREDENTIALS | awk '{print $4}' | xargs)

kubectl create secret docker-registry awsmp-image-pull-secret \
--docker-server=############.dkr.ecr.###########.amazonaws.com \
--docker-username=############ \
--docker-password=$(aws ecr get-login-password --region us-east-1) \
--namespace cilium-system

kubectl patch serviceaccount cilium-licensing \
--namespace cilium-system \
-p '{"imagePullSecrets": [{"name": "awsmp-image-pull-secret"}]}'
  • Install Isovalent Enterprise for Cilium with the Helm chart from Amazon Elastic Container Registry (ECR).
    • The Helm CLI version in your launch environment must be 3.7.1.
    • Note- username, password, and path for pulling the image have been hidden here but are available when the user is logged in.
export HELM_EXPERIMENTAL_OCI=1

aws ecr get-login-password \
    --region us-east-1 | helm registry login \
    --username ############# \
    --password-stdin ############.dkr.ecr.##########.amazonaws.com

mkdir awsmp-chart && cd awsmp-chart

helm pull oci://############.dkr.ecr.##########.amazonaws.com/isovalent/cilium-enterprise-eks-anywhere --version v1.12.8-awsmp.6

tar xf $(pwd)/* && find $(pwd) -maxdepth 1 -type f -delete

helm install cilium-enterprise \
    --namespace cilium-system ./* \
    --set clm.licenseSecretName=awsmp-license-token-secret \
    --set clm.serviceAccounts.name=cilium-licensing \
    --set clm.serviceAccounts.create=false

Output Truncated:

helm install cilium-enterprise \
    --namespace cilium-system ./* \
    --set clm.licenseSecretName=awsmp-license-token-secret \
    --set clm.serviceAccounts.name=cilium-licensing \
    --set clm.serviceAccounts.create=false 
NAME: cilium-enterprise
LAST DEPLOYED: Fri Oct 27 22:55:58 2023
NAMESPACE: cilium-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

Validation- Isovalent Enterprise for Cilium

The validation steps are the same for an EKS or EKS-A cluster running Isovalent Enterprise for Cilium.

Validate the Installation

To validate that Cilium has been properly installed with the correct version, run the cilium status command; you will see that Cilium is managing all the pods and that they are in a “Ready” state and “Available”.

cilium status --namespace cilium-system

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled


Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
Containers:       cilium-operator    Running: 2
                  cilium             Running: 2
                  hubble-relay       Running: 1
                  hubble-ui          Running: 1
Cluster Pods:     6/6 managed by Cilium
Image versions    hubble-ui          709825985650.dkr.ecr.us-east-1.amazonaws.com/isovalent/hubble-ui-enterprise:v0.18.3: 1
                  hubble-ui          709825985650.dkr.ecr.us-east-1.amazonaws.com/isovalent/hubble-ui-enterprise-backend:v0.18.3: 1
                  cilium-operator    709825985650.dkr.ecr.us-east-1.amazonaws.com/isovalent/operator-aws-cee:v1.12.8-cee.1: 2
                  cilium             709825985650.dkr.ecr.us-east-1.amazonaws.com/isovalent/cilium-cee:v1.12.8-cee.1: 2
                  hubble-relay       709825985650.dkr.ecr.us-east-1.amazonaws.com/isovalent/hubble-relay-cee:v1.12.8-cee.1: 1

Cluster and Cilium Health Check

Check the nodes’ status and ensure they are in a “Ready” state:

kubectl get nodes -o wide

NAME                                                 STATUS   ROLES    AGE   VERSION                INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-153-75.ap-southeast-2.compute.internal    Ready    <none>   53m   v1.25.13-eks-43840fb   192.168.153.75    <none>        Amazon Linux 2   5.10.192-183.736.amzn2.x86_64   containerd://1.6.19
ip-192-168-160-155.ap-southeast-2.compute.internal   Ready    <none>   53m   v1.25.13-eks-43840fb   192.168.160.155   <none>        Amazon Linux 2   5.10.192-183.736.amzn2.x86_64   containerd://1.6.19

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity:

kubectl -n cilium-system exec ds/cilium -- cilium-health status

Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), wait-for-node-init (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2023-10-27T12:17:18Z
Nodes:
  ip-192-168-160-155.ap-southeast-2.compute.internal (localhost):
    Host connectivity to 192.168.160.155:
      ICMP to stack:   OK, RTT=254.975µs
      HTTP to agent:   OK, RTT=469.853µs
    Endpoint connectivity to 192.168.180.23:
      ICMP to stack:   OK, RTT=256.425µs
      HTTP to agent:   OK, RTT=779.258µs
  ip-192-168-153-75.ap-southeast-2.compute.internal:
    Host connectivity to 192.168.153.75:
      ICMP to stack:   OK, RTT=1.042051ms
      HTTP to agent:   OK, RTT=1.533389ms
    Endpoint connectivity to 192.168.147.108:
      ICMP to stack:   OK, RTT=1.049822ms
      HTTP to agent:   OK, RTT=1.758836ms

Cilium Connectivity Test

The cilium connectivity test command deploys a series of services, deployments, and CiliumNetworkPolicy resources, then uses various connectivity paths to connect to them. Connectivity paths include with and without service load-balancing and various network policy combinations.

Output Truncated:

cilium connectivity test -n cilium-system

ℹ️  Monitor aggregation detected, will skip some flow validation steps
[cluster1.ap-southeast-2.eksctl.io] Creating namespace cilium-test for connectivity check...
[cluster1.ap-southeast-2.eksctl.io] Deploying echo-same-node service...
[cluster1.ap-southeast-2.eksctl.io] Deploying DNS test server configmap...
[cluster1.ap-southeast-2.eksctl.io] Deploying same-node deployment...
[cluster1.ap-southeast-2.eksctl.io] Deploying client deployment...
[cluster1.ap-southeast-2.eksctl.io] Deploying client2 deployment...
[cluster1.ap-southeast-2.eksctl.io] Deploying echo-other-node service...
[cluster1.ap-southeast-2.eksctl.io] Deploying other-node deployment...
[host-netns] Deploying cluster1.ap-southeast-2.eksctl.io daemonset...
[host-netns-non-cilium] Deploying cluster1.ap-southeast-2.eksctl.io daemonset...
[cluster1.ap-southeast-2.eksctl.io] Deploying echo-external-node deployment...
ℹ️  Skipping IPCache check
🔭 Enabling Hubble telescope...
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.12.8
🏃 Running tests...
✅ All 42 tests (280 actions) successful, 12 tests skipped, 1 scenarios skipped.

Validate Hubble API access

To get temporary access to the Hubble API, create a port forward to the Hubble service from your local machine or server. This will allow you to connect the Hubble client to the local port 4245 and access the Hubble Relay service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Applications in a Cluster.

kubectl port-forward -n cilium-system svc/hubble-relay 4245:80

Validate that you have access to the Hubble API via the installed CLI, and notice that both nodes are connected and flows are being accounted for.

hubble status

Healthcheck (via localhost:4245): Ok
Current/Max Flows: 4,651/8,190 (56.79%)
Flows/s: 5.75
Connected Nodes: 2/2

Run hubble observe command in a different terminal against the local port to observe cluster-wide network events through Hubble Relay:

hubble observe --server localhost:4245 --follow

Oct 27 12:32:56.756: default/client:46874 (ID:22689) <- default/server:80 (ID:48454) to-stack FORWARDED (TCP Flags: ACK, FIN, PSH)
Oct 27 12:32:56.757: default/client:46874 (ID:22689) <- default/server:80 (ID:48454) to-endpoint FORWARDED (TCP Flags: ACK, FIN, PSH)
Oct 27 12:32:56.757: default/client:46874 (ID:22689) -> default/server:80 (ID:48454) to-stack FORWARDED (TCP Flags: ACK, RST)
Oct 27 12:32:56.757: default/client:46874 (ID:22689) -> default/server:80 (ID:48454) to-endpoint FORWARDED (TCP Flags: ACK, RST)
Oct 27 12:32:58.759: default/client:38059 (ID:22689) -> kube-system/coredns-6486684487-6bqrt:53 (ID:54645) to-stack FORWARDED (UDP)
Oct 27 12:32:58.759: default/client:38059 (ID:22689) -> kube-system/coredns-6486684487-6bqrt:53 (ID:54645) to-endpoint FORWARDED (UDP)
Oct 27 12:32:58.759: default/client:38059 (ID:22689) <- kube-system/coredns-6486684487-6bqrt:53 (ID:54645) to-stack FORWARDED (UDP)
Oct 27 12:32:58.759: default/client:38059 (ID:22689) <- 

Accessing the Hubble UI

To get temporary access to the Hubble UI, create a port forward to the Hubble service from your local machine or server. This will allow you to connect to the local port 12000 and access the Hubble UI service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Applications in a Cluster.

kubectl port-forward -n cilium-system svc/hubble-ui 12000:80
  • Open http://localhost:12000 in your browser.
  • You should see a screen inviting you to select a namespace; use the namespace selector dropdown in the top left corner to select one.

Troubleshooting

An EKS-A cluster has Cilium installed and running by default; you must uninstall this default version of Cilium, or you will be prompted with the error message below:

Error: INSTALLATION FAILED: Unable to continue with install: ClusterRole "cilium" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "cilium-enterprise"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "cilium-system"

Make sure that your ~/.aws/config points to the correct region; otherwise, the operation will fail as shown below:

eksctl create iamserviceaccount \
    --name cilium-licensing \
    --namespace cilium-system \
    --cluster cluster1 \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy \
    --approve \
    --override-existing-serviceaccounts

Error: unable to describe cluster control plane: operation error EKS: DescribeCluster, https response error StatusCode: 404, RequestID: 96a7b1b7-34d5-466b-a3e9-300bd883b5ee, ResourceNotFoundException: No cluster found for name: cluster1.
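
To confirm which region your CLI is using, and whether the cluster is visible there, you can run the following (the region shown is an example):

aws configure get region
aws eks list-clusters --region ap-southeast-2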

While following the instructions to create a cluster, it’s mandatory to have an IAM OIDC provider associated with the cluster, without which the IAM service account cannot be created. If it’s not associated, you will see an error message like the one below:

eksctl create iamserviceaccount \
    --name cilium-licensing \
    --namespace cilium-system \
    --cluster cluster2 \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringFullAccess \
    --attach-policy-arn arn:aws:iam::aws:policy/AWSMarketplaceMeteringRegisterUsage \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AWSLicenseManagerConsumptionPolicy \
    --approve \
    --override-existing-serviceaccounts


2023-10-11 10:10:14 [!] no IAM OIDC provider associated with cluster, try 'eksctl utils associate-iam-oidc-provider --region=ap-northeast-1 --cluster=cluster2'
Error: unable to create iamserviceaccount(s) without IAM OIDC provider enabled

Conclusion

Hopefully, this post gave you a good overview of installing Isovalent Enterprise for Cilium from the AWS Marketplace on EKS and EKS-A clusters. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.

Try it out

These learning tracks focus on features important to engineers using Cilium in cloud environments. Cilium comes in different flavors, and not all of these features will apply to every managed Kubernetes service, whether you use GKE, AKS, or EKS. However, they should give you a good idea of the features relevant to operating Cilium in cloud environments.

Suggested Reading

  • Cilium in EKS-Anywhere: a deep dive into how to bring up an EKS-A cluster, then upgrade the embedded Cilium with either Cilium OSS or Cilium Enterprise to unlock more features. By Amit Gupta.
  • AWS picks Cilium for Networking & Security on EKS Anywhere: learn why AWS picked Cilium as its default Kubernetes CNI for networking and security on EKS Anywhere. By Thomas Graf.
