
Cilium in EKS-Anywhere

Amit Gupta

It wasn’t that far back when AWS announced Cilium as the default Container Network Interface (CNI) for EKS-Anywhere (EKS-A). As you create an EKS-A cluster, you automatically have Cilium installed and benefit from the powers of eBPF. However, an EKS-A cluster with the default Cilium image has a limited feature set. You can unlock the full feature set for your EKS-A clusters by upgrading the embedded Cilium to either Cilium OSS or Cilium Enterprise. Let’s dive in and take a look with a hands-on tutorial.

What are the benefits of Cilium in AWS?

When running in the context of AWS, Cilium can natively integrate with the cloud provider’s SDN (Software Defined Networking). Cilium can speak BGP, route traffic on the network, and represent existing network endpoints with cloud-native identities in an on-premises environment. To the application team using Kubernetes daily, the user experience will be the same regardless of whether the workload runs in Kubernetes clusters backed by public or private cloud infrastructure. Entire application stacks or even entire clusters become portable across clouds.

Cilium has several differentiators that set it apart from other networking and security solutions in the cloud native ecosystem, including:

  • eBPF-based technology: Cilium leverages eBPF technology to provide deep visibility into network traffic and granular control over network connections.
  • Micro-segmentation: Cilium enables micro-segmentation at the network level, allowing organizations to enforce policies that limit communication between different services or workloads.
  • Encryption and authentication: Cilium provides encryption and authentication of all network traffic, ensuring that only authorized parties can access data and resources.
  • Application-aware network security: Cilium provides network firewalling on L3-L7, supporting HTTP, gRPC, Kafka, and other protocols. This enables application-aware network security and protects against attacks that target specific applications or services.
  • Observability: Cilium provides rich observability of Kubernetes and cloud-native infrastructure, allowing security teams to gain security-relevant observability and feed network activity into an SIEM (Security Information and Event Management) solution such as Splunk or Elastic.
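The micro-segmentation and application-aware security described above are expressed declaratively. As a minimal sketch (the app labels and path below are hypothetical), a CiliumNetworkPolicy that allows only HTTP GET requests to /api from a frontend to a backend looks like this:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-api
spec:
  endpointSelector:
    matchLabels:
      app: backend          # policy applies to pods labeled app=backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend       # only the frontend may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"     # L7 rule: only GET requests to /api pass
          path: "/api"
```

Traffic that matches the endpoint selector but not the HTTP rule is denied at the proxy, which is what enables enforcement at the application layer rather than just at L3/L4.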

As part of a two-part blog series, Part I (the current tutorial) will do a deep dive into how to create an EKS-A cluster and upgrade it with Cilium OSS, and in Part II, you can see the benefits that Cilium provides via the rich feature set of Isovalent Enterprise for Cilium. You can read more about the announcement in Thomas Graf’s blog post and the official AWS EKS-A documentation.

What is EKS-Anywhere in brief?

EKS Anywhere creates a Kubernetes cluster on-premises for a chosen provider. Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere. To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac Administrative machine.

Creating a cluster involves downloading EKS Anywhere tools to an administrative machine and then running the eksctl anywhere create cluster command to deploy the cluster to the provider. A temporary bootstrap cluster runs on the administrative machine to direct the creation of the target cluster.

  • EKS Anywhere uses Amazon EKS Distro (EKS-D), a Kubernetes distribution customized and open-sourced by AWS. It is the same distro that powers AWS-managed EKS. This means that when you install EKS Anywhere, it has parameters and configurations optimized for AWS.
  • You can also register EKS Anywhere clusters in the AWS EKS console using the EKS Connector. Once a cluster is registered, you can visualize all of its components in the AWS EKS console.
  • EKS Connector is a StatefulSet that runs the AWS Systems Manager agent in your cluster. It is responsible for maintaining the connection between the EKS Anywhere cluster and AWS.
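As a sketch of that registration step (the cluster name, account ID, and IAM role below are hypothetical, and the connector agent role must exist beforehand):

```shell
# Register the EKS Anywhere cluster so it appears in the EKS console
aws eks register-cluster \
    --name mgmt \
    --connector-config roleArn=arn:aws:iam::111122223333:role/eks-connector-agent-role,provider=EKS_ANYWHERE
```

AWS then provides connector manifests (with activation details) to apply to the cluster, which start the SSM-based StatefulSet described above.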

Common question: What is the difference between EKS and EKS-Anywhere?

You can read more on the subtle differences between EKS-A and EKS; here we will outline a few critical ones pertaining to this tutorial.

Amazon Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it easy to run Kubernetes on the AWS cloud. Amazon EKS is certified Kubernetes-conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS. Amazon EKS-Anywhere, in contrast, lets you create and operate Kubernetes clusters on your own on-premises infrastructure, using the same EKS Distro that powers the managed service.

Feature Availability

Cilium on AWS is a powerful networking and security solution for Kubernetes environments. It is enabled by default in EKS-A via the eksa-suffixed image, but you can upgrade to either Cilium OSS or Isovalent Enterprise for Cilium to unlock more features (see table below).

Note: P=Partial indicates that only a subset of the respective feature is available, with the full set requiring the Enterprise version. For example, in the case of clustermesh, basic clustermesh features work with Cilium OSS, but a clustermesh scenario with overlapping IP addresses requires an upgrade to Isovalent Enterprise for Cilium.

Headline features compared across Cilium embedded in AWS EKS-Anywhere, Cilium OSS, and Isovalent Enterprise for Cilium:

  • Network Routing (CNI)
  • Basic Network Policy (Labels and CIDR rules)
  • Advanced Network Policy (L7 rules)
  • Network Load-Balancing (L3/L4)
  • Service Mesh & L7 Load-Balancing (Ingress, Gateway API)
  • Multi-Cluster (Load-Balancing, Policy)
  • Transit Gateway & Non-Kubernetes Workloads
  • Encryption (IPsec, WireGuard, mutual authN)
  • Advanced Routing (BGP, Egress Gateway)
  • Basic Hubble Network Observability (Metrics, Logs, OpenTelemetry)
  • Advanced Hubble Network Observability (SIEM & Storage and Analytics & RBAC)
  • Tetragon Runtime Security (observability, enforcement, SIEM)
  • Enterprise-Hardened Cilium Distribution, Training, 24×7 Enterprise-Grade Support

Note: EKS-A cluster infrastructure preparation (hardware and inventory management) is not part of the scope of this document. This document assumes you handled it before creating a cluster of any provider type.

Step 1: Preparing the Administrative Machine

The Administrative machine (Admin machine) is required to run cluster lifecycle operations, but EKS Anywhere clusters do not require a continuously running Admin machine. Critical cluster artifacts, including the kubeconfig file, SSH keys, and the full cluster specification yaml, are saved to the Admin machine during cluster creation. These files are required when running any subsequent cluster lifecycle operations. 

Administrative machine prerequisites

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries. From there, you create the cluster for your chosen provider.

  • Docker 20.x.x
  • Mac OS 10.15 / Ubuntu 20.04.2 LTS 
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • The administrative machine must be on the same Layer 2 network as the cluster machines (Bare Metal provider only).
  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation.
  • If you use Ubuntu 21.10 or 22.04, you must switch from cgroups v2 to cgroups v1.
  • If you are using Docker Desktop:
    • For EKS Anywhere Bare Metal, Docker Desktop is not supported.
    • For EKS Anywhere vSphere, if you are using Mac OS Docker Desktop 4.4.2 or newer "deprecatedCgroupv1": true must be set in ~/Library/Group\Containers/

EKS-A Cluster Prerequisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • An IAM principal has been configured and has specific permissions.
  • Curated packages: these are available to customers with the EKS-A enterprise subscription.
    • For this document, you don’t need to install these packages; cluster creation will succeed without authentication set up, with some warnings that can be ignored.
  • Firewall ports and services need to be allowed. If you run Cilium in an environment requiring firewall rules to enable connectivity, you must add the respective inbound and outbound rules to ensure Cilium works properly.
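As a sketch of what such rules can look like on an Ubuntu host using ufw (the port list below reflects commonly required Cilium ports and is an assumption; check the Cilium and EKS-A documentation for your exact setup):

```shell
sudo ufw allow 4240/tcp   # cilium-health API (node-to-node health checks)
sudo ufw allow 4244/tcp   # Hubble server (node-local observability)
sudo ufw allow 8472/udp   # VXLAN overlay (default tunnel port)
sudo ufw allow 6081/udp   # Geneve overlay (if tunnel=geneve is chosen)
sudo ufw allow 6443/tcp   # Kubernetes API server
```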

Step 2: Installing the dependencies 

Note: The administrative machine for this tutorial is based on Ubuntu 20.04.6


Install Docker

sudo apt-get remove docker docker-engine containerd runc
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin


Install eksctl

A command-line tool for working with EKS clusters that automates many individual tasks. See Installing or updating eksctl in the Amazon EKS user guide.

curl "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/


Install eksctl-anywhere

This will let you create a cluster in multiple providers for local development or production workloads.

export OS="$(uname -s | tr A-Z a-z)"
EKSA_RELEASE=$(curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml" --silent --location | yq ".spec.latestVersion")
RELEASE_NUMBER=$(curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml" --silent --location | yq ".spec.releases[] | select(.version==\"$EKSA_RELEASE\").number")
curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/


Install kubectl

A command-line tool for working with Kubernetes clusters. See Installing or updating kubectl in the Amazon EKS user guide.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo mv ./kubectl /usr/local/bin
sudo chmod +x /usr/local/bin/kubectl


Install the AWS CLI

A command-line tool for working with AWS services, including Amazon EKS. See Installing, updating, and uninstalling the AWS CLI in the AWS Command Line Interface User Guide.

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install


Install Helm

Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Make sure you have Helm 3 installed.

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm


Install yq

A lightweight and portable command-line YAML, JSON, and XML processor. yq uses jq-like syntax but works with YAML, JSON, XML, properties, CSV, and TSV files.

sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && \
    sudo chmod +x /usr/bin/yq

Step 3: Cluster creation on EKS-A

The command below creates a file named $CLUSTER_NAME.yaml in the path where it is executed. The configuration specification is divided into two sections:

  • Cluster
  • DockerDatacenterConfig

The provider type chosen for this tutorial is docker, which is a development-only provider and not meant for production. You can choose from a list of providers and modify the commands accordingly.

eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider docker > $CLUSTER_NAME.yaml


A sample $CLUSTER_NAME.yaml file (abridged):

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.29"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 1
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}


Create a cluster using the $CLUSTER_NAME.yaml file from above

eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Performing setup and validations
✅ validation succeeded {"validation": "docker Provider setup is valid"}
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Deleting bootstrap cluster
🎉 Cluster created!
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription

Step 4: Accessing the cluster

Once the cluster is created, access it using the generated KUBECONFIG file in your local directory.

export KUBECONFIG=/home/ubuntu/mgmt/mgmt-eks-a-cluster.kubeconfig

Step 5: Validating the Default Cilium version

As outlined in the features section, EKS-A comes with Cilium as the default CNI, and the image is suffixed with -eksa.

kubectl -n kube-system exec ds/cilium -- cilium version
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Client: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64
Daemon: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64

Step 6: Deploying a test workload (Optional)

EKS-A with eksa images and default Cilium has a limited set of features. You can create EKS-A test workloads and check out basic connectivity and network policy tests. The AWS examples in the documentation clearly explain how to get started. But as highlighted earlier, the default Cilium version with EKS-Anywhere is limited. Let’s install the fully-featured Cilium and review some of the additional features that come with it.
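If you want a quick stand-in workload, a minimal client/server pair like the following is enough to exercise pod-to-pod connectivity (the names below are hypothetical, chosen to mirror the client/server flows shown later in the Hubble output):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
      - name: nginx
        image: nginx:1.25      # plain HTTP server to wget against
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    app: server
  ports:
  - port: 80
```

You can then run kubectl run client --image=busybox --command -- sleep 3600 and kubectl exec client -- wget -qO- server to verify the path.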

Step 7: Upgrade to Cilium OSS

Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including Hubble observability, DNS-aware and HTTP-aware network policy, multi-cluster routing, transparent encryption, and advanced load-balancing. You will upgrade the EKS-A cluster from the default image to Cilium OSS.

Note: You can also upgrade to Cilium Enterprise, the enterprise-grade, hardened solution that addresses complex security automation, role-based access control, and integration workflows with legacy infrastructure. You can contact our sales team, and they can get you started with a demo and the next steps.

Install Cilium CLI

The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g., clustermesh, Hubble).

You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Install Hubble CLI

To access the observability data collected by Hubble, you can install the Hubble CLI. You can install the Hubble CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}

Install Cilium & Hubble

Set up Helm repository:

helm repo add cilium https://helm.cilium.io/

Deploy Cilium using helm:

helm install cilium cilium/cilium --version 1.14.10 \
  --namespace kube-system \
  --set eni.enabled=false \
  --set ipam.mode=kubernetes \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=geneve \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true 

What do these values signify?

  • eni.enabled=false — we are not using the native AWS ENI datapath.
  • ipam.mode=kubernetes — IP address management is delegated to Kubernetes; each node allocates pod IPs from its per-node PodCIDR.
  • egressMasqueradeInterfaces=eth0 — limits the network interfaces on which masquerading is performed.
  • tunnel=geneve / vxlan — the encapsulation configuration for communication between nodes.
  • hubble.metrics.enabled — Hubble metrics configuration.
  • hubble.relay.enabled=true — enables the Hubble Relay service.
  • hubble.ui.enabled=true — enables the graphical service map.
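The same settings can also be kept in a values file instead of repeated --set flags. A minimal sketch, equivalent to the command above and passed via helm install -f values.yaml:

```yaml
# values.yaml — mirrors the --set flags used above
eni:
  enabled: false
ipam:
  mode: kubernetes
egressMasqueradeInterfaces: eth0
tunnel: geneve
hubble:
  metrics:
    enabled:
    - dns
    - drop
    - tcp
    - flow
    - icmp
    - http
  relay:
    enabled: true
  ui:
    enabled: true
```

Keeping the configuration in a file makes upgrades reproducible, since the same values can be re-applied with helm upgrade.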

Note: The Cilium installation might not go through cleanly, and you may have to delete a few service accounts, secrets, clusterrolebindings, and clusterroles left over from the embedded installation. This will be fixed in an upcoming release.

kubectl delete serviceaccount cilium --namespace kube-system
kubectl delete serviceaccount cilium-operator --namespace kube-system
kubectl delete secret hubble-ca-secret --namespace kube-system
kubectl delete secret hubble-server-certs --namespace kube-system
kubectl delete configmap cilium-config --namespace kube-system
kubectl delete clusterrole cilium
kubectl delete clusterrolebinding cilium
kubectl delete clusterrolebinding cilium-operator
kubectl delete secret cilium-ca --namespace kube-system
kubectl delete service hubble-peer --namespace kube-system
kubectl delete daemonset cilium --namespace kube-system
kubectl delete deployment cilium-operator --namespace kube-system
kubectl delete clusterrole cilium-operator
kubectl delete role cilium-config-agent -n kube-system
kubectl delete rolebinding cilium-config-agent -n kube-system

Validate the installation

To validate that Cilium has been properly installed with the correct version, run cilium status; you will see that Cilium is managing all the pods, and that they are in a “Ready” state and “Available”.

cilium status
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:       hubble-relay       Running: 1
                  cilium             Running: 2
                  hubble-ui          Running: 1
                  cilium-operator    Running: 2
Cluster Pods:     20/20 managed by Cilium
Image versions    hubble-relay 1
                  cilium    2
                  hubble-ui 1
                  hubble-ui 1
                  cilium-operator 2

Cluster and Cilium Health Check

Check the status of the nodes and make sure they are in a “Ready” state.

kubectl get nodes -o wide
NAME                                      STATUS   ROLES           AGE     VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
mgmt-j5dbj (localhost)                         Ready    control-plane   6h12m   v1.27.4-eks-cedffd4    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.6.19
mgmt-md-0-68884d88b9-vc8rb   Ready    <none>          6h12m   v1.27.4-eks-cedffd4    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.7.2

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity.

kubectl -n kube-system exec ds/cilium -- cilium-health status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2023-04-04T04:33:54Z
  mgmt-j5dbj (localhost):
    Host connectivity to
      ICMP to stack:   OK, RTT=848.391µs
      HTTP to agent:   OK, RTT=180.512µs
    Endpoint connectivity to
      ICMP to stack:   OK, RTT=917.135µs
      HTTP to agent:   OK, RTT=282.271µs
    Host connectivity to
      ICMP to stack:   OK, RTT=896.943µs
      HTTP to agent:   OK, RTT=347.153µs
    Endpoint connectivity to
      ICMP to stack:   OK, RTT=889.663µs
      HTTP to agent:   OK, RTT=590.454µs

Cilium Connectivity Test

The cilium connectivity test command deploys a series of services, deployments, and CiliumNetworkPolicy resources, then exercises various connectivity paths between them. Connectivity paths include with and without service load-balancing and various network policy combinations.

Output Truncated:

cilium connectivity test 

ℹ️  Monitor aggregation detected, will skip some flow validation steps
[cilium-eksa] Creating namespace cilium-test for connectivity check...
[cilium-eksa] Deploying echo-same-node service...
[cilium-eksa] Deploying DNS test server configmap...
[cilium-eksa] Deploying same-node deployment...
[cilium-eksa] Deploying client deployment...
[cilium-eksa] Deploying client2 deployment...
🔭 Enabling Hubble telescope...
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.14.0

✅ All 42 tests (191 actions) successful, 12 tests skipped, 1 scenarios skipped.

Validate Hubble API access

To access the Hubble API, create a port forward to the Hubble service from your local machine or server. This will allow you to connect the Hubble client to the local port 4245 and access the Hubble Relay service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Applications in a Cluster.

kubectl port-forward -n kube-system svc/hubble-relay 4245:80

Validate that you have access to the Hubble API via the installed CLI, and notice that both nodes are connected and flows are being accounted for.

hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 38.51
Connected Nodes: 2/2

Run the hubble observe command in a different terminal against the local port to observe cluster-wide network events through Hubble Relay. In this case, a client app sends a wget request to a server every few seconds, and that transaction can be seen below.

hubble observe --server localhost:4245 --follow

Sep  7 09:11:51.915: (ID:64881) -> (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: (ID:64881) -> (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: default/client:35552 (ID:458) -> default/server:80 (ID:2562) to-stack FORWARDED (TCP Flags: SYN)
Sep  7 09:11:51.915: (ID:458) -> (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: (ID:64881) -> (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.916: (ID:2562) -> (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.917: (ID:458) -> (host) to-stack FORWARDED (IPv4)
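The stream can also be narrowed down with filters. A few examples (the flags below are standard Hubble CLI flags; the namespace and pod names come from the client/server transaction shown above):

```shell
# Only flows to or from the default namespace
hubble observe --server localhost:4245 --namespace default

# Only dropped packets, to debug policy denials
hubble observe --server localhost:4245 --verdict DROPPED

# Flows involving a specific pod
hubble observe --server localhost:4245 --pod default/client
```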

Accessing the Hubble UI

To access the Hubble UI, create a port forward to the Hubble service from your local machine or server. This will allow you to connect to the local port 12000 and access the Hubble UI service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Applications in a Cluster.

kubectl port-forward -n kube-system svc/hubble-ui 12000:80
  • Open http://localhost:12000 in your browser.
  • You should see a screen with an invitation to select a namespace; use the namespace selector dropdown on the top left corner to select a namespace.


Hopefully, this post gave you a good overview of how to install Cilium in EKS-Anywhere. In Part II of this blog series, we will discuss the features you can enable with Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
