
Not long ago, AWS announced Cilium as the default Container Network Interface (CNI) for EKS Anywhere (EKS-A). When you create an EKS-A cluster, Cilium is installed automatically and you benefit from the power of eBPF. However, an EKS-A cluster with the default Cilium image has a limited feature set. You can unlock the full feature set for your EKS-A clusters by upgrading the embedded Cilium to either Cilium OSS or Cilium Enterprise. Let’s dive in with a hands-on tutorial.
What are the benefits of Cilium in AWS?
When running in the context of AWS, Cilium can natively integrate with the cloud provider’s SDN (Software Defined Networking). Cilium can speak BGP, route traffic on the network, and represent existing network endpoints with cloud-native identities in an on-premises environment. To the application team using Kubernetes daily, the user experience will be the same regardless of whether the workload runs in Kubernetes clusters backed by public or private cloud infrastructure. Entire application stacks or even entire clusters become portable across clouds.
Cilium has several differentiators that set it apart from other networking and security solutions in the cloud native ecosystem, including:
- eBPF-based technology: Cilium leverages eBPF technology to provide deep visibility into network traffic and granular control over network connections.
- Micro-segmentation: Cilium enables micro-segmentation at the network level, allowing organizations to enforce policies that limit communication between different services or workloads.
- Encryption and authentication: Cilium provides encryption and authentication of all network traffic, ensuring that only authorized parties can access data and resources.
- Application-aware network security: Cilium provides network firewalling on L3-L7, with support for HTTP, gRPC, Kafka, and other protocols. This enables application-aware network security and protects against attacks that target specific applications or services.
- Observability: Cilium provides rich observability of Kubernetes and cloud-native infrastructure, allowing security teams to gain security-relevant insights and feed network activity into a SIEM (Security Information and Event Management) solution such as Splunk or Elastic.
This is Part I of a two-part blog series: Part I (the current tutorial) does a deep dive into how to create an EKS-A cluster and upgrade it to Cilium OSS, and in Part II you will see the benefits Cilium provides via the rich feature set of Isovalent Enterprise for Cilium. You can read more about the announcement in Thomas Graf’s blog post and the official AWS EKS-A documentation.
What is EKS-Anywhere in brief?
EKS Anywhere creates a Kubernetes cluster on-premises for a chosen provider. Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere. To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac Administrative machine.
Creating a cluster involves downloading the EKS Anywhere tools to an Administrative machine and then running the eksctl anywhere create cluster command to deploy that cluster to the provider. A temporary bootstrap cluster runs on the Administrative machine to direct the target cluster creation.
- EKS Anywhere uses Amazon EKS Distro (EKS-D), a Kubernetes distribution customized and open-sourced by AWS. It is the same distro that powers the AWS-managed EKS. This means that when you install EKS Anywhere, it comes with parameters and configurations optimized for AWS.
- You can also register EKS Anywhere clusters with the AWS EKS console using the EKS Connector. Once a cluster is registered, you can visualize all of its components in the AWS EKS console.
- The EKS Connector is a StatefulSet that runs the AWS Systems Manager Agent in your cluster. It is responsible for maintaining the connection between the EKS Anywhere cluster and AWS.
Common Question: What is the difference between EKS and EKS-Anywhere?
You can read more about the subtle differences between EKS-A and EKS; here we outline a few critical ones pertaining to this tutorial.
| Amazon EKS-Anywhere | Amazon Elastic Kubernetes Service |
| --- | --- |
| It is a deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support. | It is a managed Kubernetes service that makes it easy for you to run Kubernetes on the AWS cloud. Amazon EKS is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS. |
Feature Availability
Cilium on AWS is a powerful networking and security solution for Kubernetes environments. It is enabled by default in EKS-A via the eksa Cilium image, but you can upgrade to either Cilium OSS or Isovalent Enterprise for Cilium to unlock more features (see the table below).
Note: EKS-A cluster infrastructure preparation (hardware and inventory management) is not in the scope of this document. This document assumes that you have already taken care of it before proceeding with creating a cluster on any of the provider types.
Step 1: Preparing the Administrative Machine
The Administrative machine (Admin machine) is required to run cluster lifecycle operations, but EKS Anywhere clusters do not require a continuously running Admin machine to function. During cluster creation, critical cluster artifacts including the kubeconfig file, SSH keys, and the full cluster specification yaml are saved to the Admin machine. These files are required when running any subsequent cluster lifecycle operations.
Administrative machine prerequisites
Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries. From there, you create the cluster for your chosen provider.
- Docker 20.x.x
- Mac OS 10.15 / Ubuntu 20.04.2 LTS
- 4 CPU cores
- 16GB memory
- 30GB free disk space
- The administrative machine must be on the same Layer 2 network as the cluster machines (Bare Metal provider only).
- If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation.
- If you are using Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1.
- If you are using Docker Desktop:
- For EKS Anywhere Bare Metal, Docker Desktop is not supported.
- For EKS Anywhere vSphere, if you are using Mac OS Docker Desktop 4.4.2 or newer, "deprecatedCgroupv1": true must be set in ~/Library/Group\ Containers/group.com.docker/settings.json.
EKS-A Cluster Prerequisites
The following prerequisites need to be taken into account before you proceed with this tutorial:
- IAM principal has been configured and has specific permissions.
- Curated packages- These are available to customers with the EKS-A enterprise subscription.
- For the purpose of this document, you don’t need to install these packages as cluster creation will succeed if authentication is not set up with some warnings which can be ignored.
- Firewall ports and services that need to be allowed.
- If you are running Cilium in an environment that requires firewall rules to enable connectivity, you will have to add the respective firewall rules to ensure Cilium works properly.
Step 2: Installing the dependencies
Note: The administrative machine for this tutorial is based on Ubuntu 20.04.6
Docker
Install Docker on the Administrative machine.
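One way to do this on Ubuntu is Docker's convenience script; the repository-based install from the Docker CE documentation works just as well:

```bash
# Install Docker CE on Ubuntu using Docker's convenience script.
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Optional: allow the current user to run docker without sudo (log out/in afterwards).
sudo usermod -aG docker $USER
```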
eksctl
A command-line tool for working with EKS clusters that automates many individual tasks. For more information, see Installing or updating eksctl in the Amazon EKS user guide.
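A minimal sketch of installing the latest eksctl release on a Linux Admin machine (see the linked guide for other platforms):

```bash
# Download the latest eksctl release and place it on the PATH.
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
  | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/
eksctl version
```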
eksctl-anywhere
This will let you create a cluster in multiple providers for local development or production workloads.
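A sketch of installing the eksctl-anywhere plugin, following the pattern from the EKS Anywhere documentation at the time of writing (it requires curl and yq; check the current docs for the exact commands and release manifest):

```bash
# Determine the latest EKS Anywhere release and download the eksctl-anywhere plugin.
RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location \
  | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL --silent --location | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere
eksctl anywhere version
```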
kubectl
A command-line tool for working with Kubernetes clusters. For more information, see Installing or updating kubectl in the Amazon EKS user guide.
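For example, to install the latest stable kubectl on a Linux amd64 Admin machine:

```bash
# Download the latest stable kubectl binary and install it.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```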
AWS CLI
A command-line tool for working with AWS services, including Amazon EKS. See Installing, updating, and uninstalling the AWS CLI in the AWS Command Line Interface User Guide
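For example, to install AWS CLI v2 on Linux x86_64:

```bash
# Download and install AWS CLI v2.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
```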
Helm
Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Make sure you have Helm 3 installed.
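One way to install Helm 3 is via the official installer script:

```bash
# Install Helm 3 using the official installer script.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
```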
Step 3: Cluster creation on EKS-A
The command below creates a file named $CLUSTER_NAME.yaml with the contents shown further below in the path where it is executed. The configuration specification is divided into two sections:
- Cluster
- DockerDatacenterConfig
- The provider type chosen for this tutorial is docker, which is a development-only option and not intended for production. You can choose from a list of providers and modify the commands accordingly.
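A minimal sketch of that command, assuming the Docker provider and a hypothetical cluster name of mgmt:

```bash
# Generate the cluster configuration for the Docker provider.
export CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
  --provider docker > $CLUSTER_NAME.yaml
```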
Note: A sample $CLUSTER_NAME.yaml file:
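The following is an illustrative, trimmed example of what the generated file can look like for the Docker provider; the exact fields, counts, and Kubernetes version depend on the EKS Anywhere release you are using:

```yaml
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.27"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 1
    name: md-0
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}
```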
Create a cluster using the $CLUSTER_NAME.yaml file from above:
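```bash
eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
```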
Step 4: Accessing the cluster
Once the cluster is created, access it with the generated KUBECONFIG file in your local directory.
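For example, assuming the cluster was created from the current directory (the kubeconfig is written into a folder named after the cluster):

```bash
# Point kubectl at the newly created EKS-A cluster.
export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
kubectl get nodes
```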
Step 5: Validating the Default Cilium version
As outlined in the features section, EKS-A comes by default with Cilium as the CNI, and the image is suffixed with -eksa.
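One way to verify this is to list the Cilium pods and inspect the image used by the Cilium DaemonSet:

```bash
# List the Cilium agent pods installed by EKS Anywhere.
kubectl get pods -n kube-system -l k8s-app=cilium

# Print the agent image; it should end with an -eksa suffix.
kubectl get daemonset cilium -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```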
Step 6: Deploying a test workload (Optional)
EKS-A with eksa images and default Cilium has a limited set of features. You can create EKS-A test workloads and then check out some basic connectivity and network policy tests. The AWS examples in the documentation clearly explain how to get started. But as highlighted earlier, the default Cilium version that comes with EKS-Anywhere is limited. Let’s install the fully-featured Cilium and review some of the additional features that come with it.
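For example, the sample application from the EKS Anywhere documentation can be deployed as a quick smoke test (manifest URL per the AWS docs):

```bash
# Deploy the hello-eks-a sample workload and check that its pods come up.
kubectl apply -f "https://anywhere.eks.amazonaws.com/manifests/hello-eks-a.yaml"
kubectl get pods
```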
Step 7: Upgrade to Cilium OSS
Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including Hubble observability, DNS-aware and HTTP-Aware Network Policy, Multi-cluster Routing, Transparent Encryption, and Advanced Load-balancing. You will upgrade the EKS-A cluster from the default image to Cilium.
Note: You can also upgrade to Cilium Enterprise, the Enterprise-grade, hardened solution that addresses complex security automation, role-based access control, and integration workflows with legacy infrastructure. You can contact our sales teams, sales@isovalent.com, and they can get you started with a demo and the next steps.
Install Cilium CLI
The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).
You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine or server.
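For a Linux (amd64) Admin machine, the install looks like this (per the Cilium CLI release instructions):

```bash
# Install the latest stable Cilium CLI.
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
```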
Install Hubble CLI
In order to access the observability data collected by Hubble, you can install the Hubble CLI for Linux, macOS, or other distributions on your local machine or server.
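For a Linux (amd64) Admin machine, the install mirrors the Cilium CLI steps (per the Hubble release instructions):

```bash
# Install the latest stable Hubble CLI.
HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
```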
Install Cilium & Hubble
Set up Helm repository:
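```bash
helm repo add cilium https://helm.cilium.io/
helm repo update
```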
Deploy Cilium using helm:
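A sketch of the Helm install with the flags discussed below; the Cilium version (1.13.4 here), the eth0 masquerade interface, and the metrics list are assumptions you should adapt to your environment:

```bash
# Install Cilium OSS over the embedded EKS-A Cilium (assumed version and interface).
helm install cilium cilium/cilium --version 1.13.4 \
  --namespace kube-system \
  --set eni.enabled=false \
  --set ipam.mode=kubernetes \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=geneve \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true
```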
What do these values signify?

| Flag | Description |
| --- | --- |
| eni.enabled=false | We are not using the native AWS ENI datapath. |
| ipam.mode=kubernetes | Enables the Kubernetes host-scope IPAM mode, which delegates address allocation to each individual node in the cluster. |
| egressMasqueradeInterfaces | Limits the network interfaces on which masquerading is performed. |
| tunnel=geneve/vxlan | The encapsulation protocol used for communication between nodes. |
| hubble.metrics.enabled | Configures which Hubble metrics are exposed. |
| hubble.relay.enabled | Enables the Hubble Relay service. |
| hubble.ui.enabled | Enables the graphical service map (Hubble UI). |
Note: The Cilium installation might not go through on the first attempt, and you may have to delete a few leftover service accounts, secrets, clusterrolebindings, and clusterroles. This will be fixed in an upcoming release.
Validate the installation
To validate that Cilium has been properly installed with the correct version, run the cilium status command and observe that Cilium is managing all the pods and that they are in a “Ready” and “Available” state.
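```bash
# Wait for the Cilium installation to converge and print its status.
cilium status --wait
```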
Cluster and Cilium Health Check
Check the status of the nodes and make sure they are in a “Ready” state
cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity.
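For example:

```bash
# All nodes should report Ready.
kubectl get nodes

# Query cluster-wide connectivity health from one of the Cilium agents.
kubectl -n kube-system exec ds/cilium -- cilium-health status
```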
Cilium Connectivity Test
The cilium connectivity test command deploys a series of services, deployments, and CiliumNetworkPolicy resources which will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations.
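```bash
cilium connectivity test
```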
Output Truncated:
Validate Hubble API access
To access the Hubble API, create a port forward to the Hubble service from your local machine or server. This will allow you to connect the Hubble client to the local port 4245 and access the Hubble Relay service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.
Validate that you have access to the Hubble API via the installed CLI and notice that both the nodes are connected and flows are being accounted for.
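One way to do this with the Cilium and Hubble CLIs:

```bash
# Forward the Hubble Relay service to localhost:4245 (runs in the background).
cilium hubble port-forward &

# Confirm the Hubble CLI can reach the API and that flows are being counted.
hubble status
```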
Run the hubble observe command in a different terminal against the local port to observe cluster-wide network events through Hubble Relay:
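```bash
# Stream network flow events from Hubble Relay.
hubble observe --follow
```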
- In this case, a client app sends a “wget request” to a server every few seconds, and that transaction can be seen below.
Accessing the Hubble UI
In order to access the Hubble UI, create a port forward to the Hubble service from your local machine or server. This will allow you to connect to the local port 12000 and access the Hubble UI service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.
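The Cilium CLI can set up this port forward for you:

```bash
# Port-forward the Hubble UI to localhost:12000 and open it in the browser.
cilium hubble ui
```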
- This will redirect you to http://localhost:12000 in your browser.
- You should see a screen with an invitation to select a namespace; use the namespace selector dropdown in the top left corner to select a namespace:

Conclusion
Hopefully, this post gave you a good overview of how to install Cilium in EKS-Anywhere. In Part II of this blog series, we will discuss more of the features you can enable with Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
Try it Out
Suggested Reading

Amit Gupta is a Senior Technical Marketing Engineer at Isovalent, the company powering eBPF cloud-native networking and security. Amit has 20+ years of experience in Networking, Telecommunications, Cloud, Security, and Open Source and has worked in the past with companies like Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area in his time outside of work. Amit is an avid runner and cyclist and also spends a considerable amount of time helping kids in orphanages.