Cilium in EKS-Anywhere

Amit Gupta

It wasn’t that long ago that AWS announced Cilium as the default Container Network Interface (CNI) for EKS-Anywhere (EKS-A). When you create an EKS-A cluster, Cilium is installed automatically and you benefit from the powers of eBPF. However, an EKS-A cluster with the default Cilium image has a limited feature set. You can unlock the full feature set for your EKS-A clusters by upgrading the embedded Cilium to either Cilium OSS or Isovalent Enterprise for Cilium. Let’s dive into it and take a look with a hands-on tutorial.

What are the benefits of Cilium in AWS?

When running in the context of AWS, Cilium can natively integrate with the cloud provider’s SDN (Software Defined Networking). Cilium can speak BGP, route traffic on the network, and represent existing network endpoints with cloud-native identities in an on-premises environment. To the application team using Kubernetes daily, the user experience will be the same regardless of whether the workload runs in Kubernetes clusters backed by public or private cloud infrastructure. Entire application stacks or even entire clusters become portable across clouds.

Cilium has several differentiators that set it apart from other networking and security solutions in the cloud native ecosystem, including:

  • eBPF-based technology: Cilium leverages eBPF technology to provide deep visibility into network traffic and granular control over network connections.
  • Micro-segmentation: Cilium enables micro-segmentation at the network level, allowing organizations to enforce policies that limit communication between different services or workloads.
  • Encryption and authentication: Cilium provides encryption and authentication of all network traffic, ensuring that only authorized parties can access data and resources.
  • Application-aware network security: Cilium provides network firewalling on L3-L7, supporting HTTP, gRPC, Kafka, and other protocols. This enables application-aware network security and protects against attacks that target specific applications or services (see the example policy after this list).
  • Observability: Cilium provides rich observability of Kubernetes and cloud-native infrastructure, allowing security teams to gain security-relevant observability and feed network activity into an SIEM (Security Information and Event Management) solution such as Splunk or Elastic.
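To make the micro-segmentation and L7 policy bullets concrete, below is a minimal CiliumNetworkPolicy sketch; the app=frontend/app=backend labels and the /api path are hypothetical placeholders, not part of this tutorial. It allows only frontend pods to reach backend pods on port 80, and only for HTTP GET requests under /api.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend-api
spec:
  endpointSelector:
    matchLabels:
      app: backend            # the policy applies to backend pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend         # only frontend pods may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/.*"     # L7 rule: only GET on /api paths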

This is part of a two-part blog series. Part I (the current tutorial) explores how to create an EKS-A cluster and upgrade it with Cilium OSS. In Part II, you will see the benefits that Cilium provides via its rich feature set with Isovalent Enterprise for Cilium. You can read more about the announcement in Thomas Graf’s blog post and the official AWS EKS-A documentation.

What is EKS-Anywhere in brief?

EKS Anywhere creates a Kubernetes cluster on-premises for a chosen provider. Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere. To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac Administrative machine.

Creating a cluster involves downloading EKS Anywhere tools to an administrative machine and running the eksctl anywhere create cluster command to deploy the cluster to the provider. A temporary bootstrap cluster runs on the administrative machine to direct the creation of the target cluster.

  • EKS Anywhere uses Amazon EKS Distro (EKS-D), a Kubernetes distribution customized and open-sourced by AWS. This distro powers the AWS-managed EKS, which means that when you install EKS Anywhere, it has parameters and configurations optimized for AWS.
  • You can also register EKS Anywhere clusters with the AWS EKS console using the EKS Connector. Once a cluster is registered, you can visualize all of its components in the AWS EKS console.
  • The EKS Connector is a StatefulSet that runs the AWS Systems Manager agent in your cluster. It is responsible for maintaining the connection between the EKS Anywhere cluster and AWS.

Common question: What is the difference between EKS and EKS-Anywhere?

You can read more about the subtle differences between EKS-A and EKS. We will outline a few critical ones for this tutorial. 

Amazon EKS-Anywhere: a deployment option that lets you create and operate Kubernetes clusters on your own, customer-managed infrastructure on-premises, using the same EKS Distro that powers Amazon EKS, with cluster lifecycle tooling from AWS.

Amazon Elastic Kubernetes Service: a managed Kubernetes service that makes it easy to run Kubernetes on the AWS cloud. Amazon EKS is certified Kubernetes-conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS.

Feature Availability

Cilium on AWS is a powerful networking and security solution for Kubernetes environments. It is enabled by default in EKS-A via the eksa-suffixed image, but you can upgrade to either Cilium OSS or Isovalent Enterprise for Cilium to unlock more features (see the table below).

Note: P (Partial) indicates that only a subset of the respective feature is available in that edition, with the remainder available in the Enterprise version. For example, basic clustermesh features work with Cilium OSS, but a clustermesh scenario with overlapping IP addresses requires an upgrade to Isovalent Enterprise for Cilium.

Headline/Feature                                                             Embedded (EKS-A)   Cilium OSS   Isovalent Enterprise for Cilium
Network Routing (CNI)                                                        ✓                  ✓            ✓
Basic Network Policy (Labels and CIDR rules)                                 ✓                  ✓            ✓
Advanced Network Policy (L7 rules)                                           ✗                  ✓            ✓
Network Load-Balancing (L3/L4)                                               ✗                  ✓            ✓
Service Mesh & L7 Load-Balancing (Ingress, Gateway API)                      ✗                  ✓            ✓
Multi-Cluster (Load-Balancing, Policy)                                       ✗                  P            ✓
Transit Gateway & Non-Kubernetes Workloads                                   ✗                  ✗            ✓
Encryption (IPsec, WireGuard, mutual authN)                                  ✗                  ✓            ✓
Advanced Routing (BGP, Egress Gateway)                                       ✗                  P            ✓
Basic Hubble Network Observability (Metrics, Logs, OpenTelemetry)            ✗                  ✓            ✓
Advanced Hubble Network Observability (SIEM & Storage & Analytics & RBAC)    ✗                  ✗            ✓
Tetragon Runtime Security (observability, enforcement, SIEM)                 ✗                  P            ✓
Enterprise-Hardened Cilium Distribution, Training, 24×7 Enterprise Support   ✗                  ✗            ✓

Note: EKS-A cluster infrastructure preparation (hardware and inventory management) is outside the scope of this document. It is assumed that you have handled it before creating a cluster of any provider type.

Step 1: Preparing the Administrative Machine

The Administrative machine (Admin machine) is required to run cluster lifecycle operations, but EKS Anywhere clusters do not require a continuously running Admin machine. Critical cluster artifacts, including the kubeconfig file, SSH keys, and the full cluster specification yaml, are saved to the Admin machine during cluster creation. These files are required when running any subsequent cluster lifecycle operations. 

Administrative machine prerequisites

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries. From there, you create the cluster for your chosen provider.

  • Docker 20.x.x
  • Mac OS 10.15 / Ubuntu 22.04.4 LTS 
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • The administrative machine must be on the same Layer 2 network as the cluster machines (Bare Metal provider only).
  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation.
  • If you use EKS Anywhere v0.15 or earlier and Ubuntu 21.10 or 22.04, you must switch from cgroups v2 to cgroups v1.
  • If you are using Docker Desktop:
    • For EKS Anywhere Bare Metal, Docker Desktop is not supported.
    • For EKS Anywhere vSphere, if you are using Mac OS Docker Desktop 4.4.2 or newer, "deprecatedCgroupv1": true must be set in ~/Library/Group Containers/group.com.docker/settings.json

EKS-A Cluster Prerequisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • IAM principal has been configured and has specific permissions.
  • Curated packages: these are available to customers with the EKS-A Enterprise subscription.
    • For this document, you don’t need to install these packages; cluster creation will still succeed without authentication set up, producing some warnings that can be ignored.
  • Firewall ports and services need to be allowed.
  • If you run Cilium in an environment requiring firewall rules to enable connectivity, you must add the respective firewall rules to ensure Cilium works properly (an example set is sketched after this list):
    • Inbound Rules
    • Outbound Rules
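As an illustration, and based on Cilium’s documented firewall requirements (verify against the Cilium version you deploy), typical node-to-node rules for the tunnel-based datapath used later in this tutorial include:

  • 8472/UDP (VXLAN) or 6081/UDP (Geneve), for the overlay network
  • 4240/TCP, for cilium-health inter-node monitoring
  • 4244/TCP, for the Hubble observability API when Hubble Relay is enabled
  • ICMP echo request/reply (type 8/0), for health checks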

Step 2: Installing the dependencies 

Note: The administrative machine for this tutorial is based on Ubuntu 22.04.4

Docker

Install docker

# Run the following command to uninstall all conflicting packages:
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# To install the latest version, run:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

yq

A lightweight and portable command-line YAML, JSON, and XML processor. yq uses jq-like syntax and works with YAML, JSON, XML, properties, CSV, and TSV files.

sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq
sudo chmod +x /usr/bin/yq

eksctl

A command-line tool for working with EKS clusters that automates many individual tasks. See Installing or updating eksctl in the Amazon EKS user guide.

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo install -m 0755 /tmp/eksctl /usr/local/bin/eksctl

eksctl-anywhere

This will let you create a cluster with multiple providers for local development or production workloads.

RELEASE_VERSION=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.latestVersion")
EKS_ANYWHERE_TARBALL_URL=$(curl https://anywhere-assets.eks.amazonaws.com/releases/eks-a/manifest.yaml --silent --location | yq ".spec.releases[] | select(.version==\"$RELEASE_VERSION\").eksABinary.$(uname -s | tr A-Z a-z).uri")
curl $EKS_ANYWHERE_TARBALL_URL \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo install -m 0755 ./eksctl-anywhere /usr/local/bin/eksctl-anywhere

kubectl

A command-line tool for working with Kubernetes clusters. See Installing or updating kubectl in the Amazon EKS user guide.

curl -LO https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

AWS CLI

A command-line tool for working with AWS services, including Amazon EKS. See Installing, updating, and uninstalling the AWS CLI in the AWS Command Line Interface User Guide. 

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install

Helm

Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Make sure you have Helm 3 installed.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
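Before moving on, you can optionally confirm that all the binaries are on your PATH and report sensible versions:

docker --version
yq --version
eksctl version
eksctl anywhere version
kubectl version --client
aws --version
helm version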

Step 3: Cluster creation on EKS-A

The command below creates a file named $CLUSTER_NAME.yaml in the directory where it is executed. The configuration specification is divided into two sections:

  • Cluster
  • DockerDatacenterConfig

The provider type chosen for this tutorial is docker, which is a development-only option and not for production. You can choose from a list of providers and modify the commands accordingly (see the example after the command below).
CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider docker > $CLUSTER_NAME.yaml
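For example, to target a different provider, only the --provider flag changes; a vSphere sketch would look like:

eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider vsphere > $CLUSTER_NAME.yaml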

Note: a sample $CLUSTER_NAME.yaml file is shown below.

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.29"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 2
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}

---

Create a cluster using the $CLUSTER_NAME.yaml file from above

Note: it’s advisable to run this command inside a utility that keeps the session alive in the background, such as screen in Linux, since bootstrapping the cluster takes time and your SSH session could time out. If the SSH connection is lost, the screen session keeps running, and you can always reattach to it via a new SSH-based login (see the sketch below).
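For example, a minimal screen workflow (assuming the screen package is installed) looks like this:

# Start a named session, then run the create command inside it
screen -S eksa

# Detach from the session with Ctrl-a d; after logging in again, reattach with:
screen -r eksa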

eksctl anywhere create cluster -f $CLUSTER_NAME.yaml

Using the new workflow using the controller for management cluster create
Performing setup and validations
Warning: The docker infrastructure provider is meant for local development and testing only
✅ Docker Provider setup is valid
✅ Validate OS is compatible with registry mirror configuration
✅ Validate certificate for registry mirror
✅ Validate authentication for git provider
✅ Validate cluster's eksaVersion matches EKS-A version
Creating new bootstrap cluster
Provider specific pre-capi-install-setup on bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific post-setup
Installing EKS-A custom components on bootstrap cluster
Installing EKS-D components
Installing EKS-A custom components (CRD and controller)
Creating new workload cluster
Creating EKS-A namespace
Installing cluster-api providers on workload cluster
Installing EKS-A secrets on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components on workload cluster
Installing EKS-D components
Installing EKS-A custom components (CRD and controller)
Moving cluster spec to workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Writing cluster config file
Deleting bootstrap cluster
🎉 Cluster created!
--------------------------------------------------------------------------------------
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription
--------------------------------------------------------------------------------------
Enabling curated packages on the cluster
Installing helm chart on cluster        {"chart": "eks-anywhere-packages", "version": "0.4.3-eks-a-68"}

Step 4: Accessing the cluster

Once the cluster is created, access it using the generated KUBECONFIG file in your local directory.

export KUBECONFIG=/home/ubuntu/mgmt/mgmt-eks-a-cluster.kubeconfig
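A quick sanity check confirms that the kubeconfig works and all nodes are reachable:

kubectl get nodes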

Step 5: Validating the Default Cilium version

As outlined in the features section, EKS-A comes by default with Cilium as the CNI, and the image tag is suffixed with -eksa.

kubectl -n kube-system exec ds/cilium -- cilium version
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Client: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64
Daemon: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64

Step 6: Deploying a test workload (Optional)

An EKS-A cluster with the eksa images and default Cilium has a limited set of features. You can create EKS-A test workloads and check out basic connectivity and network policy tests; the AWS examples in the documentation clearly explain how to get started (a minimal sketch follows below). But as highlighted earlier, the default Cilium version with EKS-Anywhere is limited. Let’s install the fully featured Cilium and review some of the additional features that come with it.
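As a minimal sketch (the nginx and client names are arbitrary placeholders, not taken from the AWS examples), you could deploy a test server, expose it, and verify pod-to-service connectivity from a throwaway client pod:

# Deploy a test server and expose it as a ClusterIP service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80

# Fetch the default page from a temporary client pod, then clean up
kubectl run client --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx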

Step 7: Upgrade to Cilium OSS

Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including Hubble observability, DNS-aware and HTTP-aware Network Policy, Multi-cluster Routing, Transparent Encryption, and Advanced Load-balancing. You will upgrade the EKS-A cluster from the default image to Cilium OSS.
Note: You can also upgrade to Cilium Enterprise, an Enterprise-grade, hardened solution that addresses complex security automation, role-based access control, and integration workflows with legacy infrastructure. Contact our sales teams at sales@isovalent.com, and they can provide a demo and the next steps.

Install Cilium CLI

The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g., clustermesh, Hubble).

You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
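You can verify the CLI installation before proceeding:

cilium version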

Install Hubble CLI

To access the observability data collected by Hubble, you can install the Hubble CLI. You can install the Hubble CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}

Install Cilium & Hubble

Set up Helm repository:

helm repo add cilium https://helm.cilium.io/

Deploy Cilium using helm:

helm install cilium cilium/cilium --version 1.14.10 \
  --namespace kube-system \
  --set eni.enabled=false \
  --set ipam.mode=kubernetes \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=geneve \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true 

What do these values signify?

Flag                                  Meaning
eni.enabled=false                     We are not using the native AWS ENI datapath.
ipam.mode=kubernetes                  Sets the IPAM mode to Kubernetes.
egressMasqueradeInterfaces=eth0       Limits the network interfaces on which masquerading is performed.
tunnel=geneve                         The encapsulation protocol used for communication between nodes.
hubble.metrics.enabled                Hubble metrics configuration.
hubble.relay.enabled=true             Enables the Hubble Relay service.
hubble.ui.enabled=true                Enables the graphical service map (Hubble UI).
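To double-check which values were applied to the release, you can ask Helm directly:

helm get values cilium -n kube-system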

Note: the Cilium OSS installation might not go through on the first attempt, because EKS Anywhere manages the lifecycle of the embedded Cilium. You can work around this in two ways (choose either of the two options):

  • Set spec.clusterNetwork.cniConfig.cilium.skipUpgrade to true, either at cluster creation or by upgrading your existing EKS-A cluster, as in the sample below.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: cilium
spec:
  clusterNetwork:
    cniConfig:
      cilium:
        skipUpgrade: true
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: cilium
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.29"
  managementCluster:
    name: cilium
  workerNodeGroupConfigurations:
  - count: 2
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: cilium
spec: {}

---
  • Alternatively, since all EKS-A clusters are deployed with the embedded edition of Cilium, uninstall it before upgrading to Cilium OSS:
kubectl delete serviceaccount cilium --namespace kube-system
kubectl delete serviceaccount cilium-operator --namespace kube-system
kubectl delete secret hubble-ca-secret --namespace kube-system
kubectl delete secret hubble-server-certs --namespace kube-system
kubectl delete configmap cilium-config --namespace kube-system
kubectl delete clusterrole cilium
kubectl delete clusterrolebinding cilium
kubectl delete clusterrolebinding cilium-operator
kubectl delete secret cilium-ca --namespace kube-system
kubectl delete service hubble-peer --namespace kube-system
kubectl delete service cilium-agent --namespace kube-system
kubectl delete daemonset cilium --namespace kube-system
kubectl delete deployment cilium-operator --namespace kube-system
kubectl delete clusterrole cilium-operator
kubectl delete role  cilium-config-agent -n kube-system
kubectl delete rolebinding cilium-config-agent -n kube-system
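After removing these resources, re-run the helm install command from above. You can confirm that the old agent pods are gone, and later that the new ones come up, with:

kubectl -n kube-system get pods -l k8s-app=cilium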

Validate the installation

To validate that Cilium has been properly installed with the correct version, run cilium status; you will see that Cilium is managing all the pods and that they are in a “Ready” state and “Available”.

cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:       hubble-relay       Running: 1
                  cilium             Running: 2
                  hubble-ui          Running: 1
                  cilium-operator    Running: 2
Cluster Pods:     20/20 managed by Cilium
Image versions    hubble-relay       quay.io/cilium/hubble-relay:v1.14.0@sha256:da96840b638d3e9705cfc48af2bddfe92d17eb4f5a776b075bef9ac50efbb042: 1
                  cilium             quay.io/cilium/cilium:v1.14.0@sha256:994b8b3b26d8a1ef74b51a163daa1ac02aceb9b16f794f8120f15a12011739dc: 2
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.11.0@sha256:14c04d11f78da5c363f88592abae8d2ecee3cbe009f443ef11df6ac5f692d839: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.11.0@sha256:bcb369c47cada2d4257d63d3749f7f87c91dde32e010b223597306de95d1ecc8: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.14.0@sha256:753c1d0549032da83ec45333feec6f4b283331618a1f7fed2f7e2d36efbd4bc9: 2

Cluster and Cilium Health Check

Check the status of the nodes and make sure they are in a “Ready” state.

kubectl get nodes -o wide
NAME                                      STATUS   ROLES           AGE     VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
mgmt-j5dbj                   Ready    control-plane   6h12m   v1.27.4-eks-cedffd4   172.18.0.5    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.6.19
mgmt-md-0-68884d88b9-vc8rb   Ready    <none>          6h12m   v1.27.4-eks-cedffd4   172.18.0.6    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.7.2

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. Use it as follows:

kubectl -n kube-system exec ds/cilium -- cilium-health status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2023-04-04T04:33:54Z
Nodes:
  mgmt-j5dbj (localhost):
    Host connectivity to 172.18.0.5:
      ICMP to stack:   OK, RTT=848.391µs
      HTTP to agent:   OK, RTT=180.512µs
    Endpoint connectivity to 192.168.0.114:
      ICMP to stack:   OK, RTT=917.135µs
      HTTP to agent:   OK, RTT=282.271µs
  mgmt-md-0-68884d88b9-vc8rb:
    Host connectivity to 172.18.0.6:
      ICMP to stack:   OK, RTT=896.943µs
      HTTP to agent:   OK, RTT=347.153µs
    Endpoint connectivity to 192.168.1.234:
      ICMP to stack:   OK, RTT=889.663µs
      HTTP to agent:   OK, RTT=590.454µs

Cilium Connectivity Test

The cilium connectivity test command deploys a series of services, deployments, and CiliumNetworkPolicy resources that use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations.

cilium connectivity test

Output truncated:

ℹ️  Monitor aggregation detected, will skip some flow validation steps
[cilium-eksa] Creating namespace cilium-test for connectivity check...
[cilium-eksa] Deploying echo-same-node service...
[cilium-eksa] Deploying DNS test server configmap...
[cilium-eksa] Deploying same-node deployment...
[cilium-eksa] Deploying client deployment...
[cilium-eksa] Deploying client2 deployment...
🔭 Enabling Hubble telescope...
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.14.0

✅ All 42 tests (191 actions) successful, 12 tests skipped, 1 scenarios skipped.

Validate Hubble API access

To access the Hubble API, create a port forward to the Hubble service from your local machine or server. This will allow you to connect the Hubble client to the local port 4245 and access the Hubble Relay service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.

kubectl port-forward -n kube-system svc/hubble-relay 4245:80

Validate that you have access to the Hubble API via the installed CLI and notice that both the nodes are connected and flows are being accounted for.

hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 38.51
Connected Nodes: 2/2

Run the hubble observe command in a different terminal against the local port to observe cluster-wide network events through Hubble Relay:

  • In this case, a client app sends a wget request to a server every few seconds, and that transaction can be seen below.
hubble observe --server localhost:4245 --follow

Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: default/client:35552 (ID:458) -> default/server:80 (ID:2562) to-stack FORWARDED (TCP Flags: SYN)
Sep  7 09:11:51.915: 192.168.2.200 (ID:458) -> 192.168.1.12 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.916: 192.168.1.12 (ID:2562) -> 192.168.2.200 (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.917: 192.168.2.200 (ID:458) -> 192.168.1.12 (host) to-stack FORWARDED (IPv4)

Accessing the Hubble UI

To access the Hubble UI, create a port forward to the Hubble service from your local machine or server. This will allow you to connect to the local port 12000 and access the Hubble UI service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.

kubectl port-forward -n kube-system svc/hubble-ui 12000:80
  • Open http://localhost:12000 in your browser.
  • You should see a screen with an invitation to select a namespace; use the namespace selector dropdown in the top left corner to select a namespace.

Conclusion

Hopefully, this post gave you a good overview of how to install Cilium in EKS-Anywhere. Part II of this blog series will discuss the features you can enable with Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
