
Cilium in EKS-Anywhere

Amit Gupta

It wasn’t that far back when AWS announced Cilium as the default Container Network Interface (CNI) for EKS-Anywhere (EKS-A). When you create an EKS-A cluster, Cilium is installed automatically, and you benefit from the powers of eBPF. However, an EKS-A cluster with the default Cilium image has a limited feature set. You can unlock the full feature set of your EKS-A clusters by upgrading the embedded Cilium to either Cilium OSS or Isovalent Enterprise for Cilium. Let’s dive in and take a look with a hands-on tutorial.

What are the benefits of Cilium in AWS?

When running in the context of AWS, Cilium can natively integrate with the cloud provider’s SDN (Software Defined Networking). Cilium can speak BGP, route traffic on the network, and represent existing network endpoints with cloud-native identities in an on-premises environment. To the application team using Kubernetes daily, the user experience will be the same regardless of whether the workload runs in Kubernetes clusters backed by public or private cloud infrastructure. Entire application stacks or even entire clusters become portable across clouds.

Cilium has several differentiators that set it apart from other networking and security solutions in the cloud native ecosystem, including:

  • eBPF-based technology: Cilium leverages eBPF technology to provide deep visibility into network traffic and granular control over network connections.
  • Micro-segmentation: Cilium enables micro-segmentation at the network level, allowing organizations to enforce policies that limit communication between different services or workloads.
  • Encryption and authentication: Cilium provides encryption and authentication of all network traffic, ensuring that only authorized parties can access data and resources.
  • Application-aware network security: Cilium provides network firewalling on L3-L7, with support for HTTP, gRPC, Kafka, and other protocols. This enables application-aware network security and protects against attacks that target specific applications or services (see the sample policy after this list).
  • Observability: Cilium provides rich observability of Kubernetes and cloud-native infrastructure, allowing security teams to gain security-relevant observability and feed network activity into an SIEM (Security Information and Event Management) solution such as Splunk or Elastic.
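
To make the policy capabilities above concrete, here is a minimal CiliumNetworkPolicy sketch; the frontend/backend labels, port, and path are hypothetical and only illustrate a combined identity-based and L7 rule:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-from-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend          # the policy applies to backend pods
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend       # only frontend pods may connect
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "GET"     # and only with HTTP GET on /api/...
          path: "/api/.*"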

As part of a two-part blog series, Part I (this tutorial) will do a deep dive into how to create an EKS-A cluster and upgrade it with Cilium OSS, and in Part II, you can see the benefits that Cilium provides via the rich feature set of Isovalent Enterprise for Cilium. You can read more about the announcement in Thomas Graf’s blog post and the official AWS EKS-A documentation.

What is EKS-Anywhere in brief?

EKS Anywhere creates a Kubernetes cluster on-premises for a chosen provider. Supported providers include Bare Metal (via Tinkerbell), CloudStack, and vSphere. To manage that cluster, you can run cluster create and delete commands from an Ubuntu or Mac Administrative machine.

Creating a cluster involves downloading EKS Anywhere tools to an Administrative machine, and then running the eksctl anywhere create cluster command to deploy that cluster to the provider. A temporary bootstrap cluster runs on the Administrative machine to direct the target cluster creation.

  • EKS Anywhere uses Amazon EKS Distro (EKS-D), a Kubernetes distribution customized and open-sourced by AWS. It is the same distro that powers AWS-managed EKS. This means that when you install EKS Anywhere, it has parameters and configurations optimized for AWS.
  • You can also register EKS Anywhere clusters with the AWS EKS console using the EKS Connector. Once a cluster is registered, you can visualize all of its components in the AWS EKS console (a sample registration command follows this list).
  • The EKS Connector is a StatefulSet that runs the AWS Systems Manager Agent in your cluster. It is responsible for maintaining the connection between the EKS Anywhere cluster and AWS.
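
As an illustration, registering a cluster could look like the sketch below; the cluster name and IAM role ARN are placeholders, and the command returns connector configuration that you subsequently apply to the cluster as a manifest:

# Register an EKS-A cluster with the EKS console (placeholders: name, role ARN)
aws eks register-cluster \
    --name mgmt \
    --connector-config roleArn=arn:aws:iam::111122223333:role/eks-connector-agent-role,provider=EKS_ANYWHERE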

Common Question- What is the difference between EKS and EKS-Anywhere?

You can read more on the subtle differences between EKS-A and EKS; below, we outline a few critical ones that pertain to this tutorial.

Amazon EKS-Anywhere: A deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support.

Amazon Elastic Kubernetes Service: A managed Kubernetes service that makes it easy for you to run Kubernetes on the AWS cloud. Amazon EKS is certified Kubernetes conformant, so existing applications that run on upstream Kubernetes are compatible with Amazon EKS.

Feature Availability

Cilium on AWS is a powerful networking and security solution for Kubernetes environments. It is enabled by default in EKS-A via the eksa-suffixed image, but you can upgrade to either Cilium OSS or Isovalent Enterprise for Cilium to unlock more features (see the table below).

Note: P (Partial) indicates that only a subset of the respective feature is available in Cilium OSS, with the full set available in the Enterprise version. For example, basic Cluster Mesh works with Cilium OSS, but a Cluster Mesh scenario with overlapping IP addresses requires upgrading to Isovalent Enterprise for Cilium.

Headline/Feature | Cilium Embedded in AWS EKS-Anywhere | Cilium OSS | Isovalent Enterprise for Cilium
Network Routing (CNI) | ✓ | ✓ | ✓
Basic Network Policy (Labels and CIDR rules) | ✓ | ✓ | ✓
Advanced Network Policy (L7 rules) | – | ✓ | ✓
Network Load-Balancing (L3/L4) | – | ✓ | ✓
Service Mesh & L7 Load-Balancing (Ingress, GatewayAPI) | – | ✓ | ✓
Multi-Cluster (Load-Balancing, Policy) | – | P | ✓
Transit Gateway & Non-Kubernetes Workloads | – | – | ✓
Encryption (IPsec, WireGuard, mutual authN) | – | ✓ | ✓
Advanced Routing (BGP, Egress Gateway) | – | P | ✓
Basic Hubble Network Observability (Metrics, Logs, OpenTelemetry) | – | ✓ | ✓
Advanced Hubble Network Observability (SIEM & Storage & Analytics & RBAC) | – | – | ✓
Tetragon Runtime Security (observability, enforcement, SIEM) | – | P | ✓
Enterprise-Hardened Cilium Distribution, Training, 24×7 Enterprise Grade Support | – | – | ✓

Note: EKS-A cluster infrastructure preparation (hardware and inventory management) is not in the scope of this document. This document assumes that you have already taken care of it before proceeding with creating a cluster on any of the provider types.

Step 1: Preparing the Administrative Machine

The Administrative machine (Admin machine) is required to run cluster lifecycle operations, but EKS Anywhere clusters do not require a continuously running Admin machine to function. During cluster creation, critical cluster artifacts including the kubeconfig file, SSH keys, and the full cluster specification yaml are saved to the Admin machine. These files are required when running any subsequent cluster lifecycle operations. 

Administrative machine prerequisites

Creating an EKS Anywhere cluster begins with setting up an Administrative machine where you will run Docker and add some binaries. From there, you create the cluster for your chosen provider.

  • Docker 20.x.x
  • Mac OS 10.15 / Ubuntu 20.04.2 LTS 
  • 4 CPU cores
  • 16GB memory
  • 30GB free disk space
  • The administrative machine must be on the same Layer 2 network as the cluster machines (Bare Metal provider only).
  • If you are using Ubuntu, use the Docker CE installation instructions to install Docker and not the Snap installation.
  • If you are using Ubuntu 21.10 or 22.04, you will need to switch from cgroups v2 to cgroups v1 (see the sketch after this list).
  • If you are using Docker Desktop:
    • For EKS Anywhere Bare Metal, Docker Desktop is not supported.
    • For EKS Anywhere vSphere, if you are using Mac OS Docker Desktop 4.4.2 or newer, "deprecatedCgroupv1": true must be set in ~/Library/Group\ Containers/group.com.docker/settings.json
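
For the cgroups v1 switch mentioned above, a minimal sketch, assuming GRUB is the bootloader (a reboot is required):

# Add systemd.unified_cgroup_hierarchy=0 to the kernel command line
sudo sed -i 's/GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
sudo update-grub
sudo reboot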

EKS-A Cluster Prerequisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • An IAM principal has been configured with specific permissions.
  • Curated packages: these are available to customers with the EKS-A Enterprise subscription.
    • For this document, you don’t need to install these packages; cluster creation will succeed even if authentication is not set up, with some warnings that can be ignored.
  • Firewall ports and services need to be allowed.
  • If you are running Cilium in an environment that requires firewall rules to enable connectivity, you will have to add the respective firewall rules to ensure Cilium works properly (a sample follows below).
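
As a hedged example using ufw (adapt to your firewall tooling, and open only the tunnel port you actually configure):

sudo ufw allow 4240/tcp   # cilium-health inter-node probes
sudo ufw allow 4244/tcp   # Hubble server on each node
sudo ufw allow 8472/udp   # VXLAN overlay traffic
sudo ufw allow 6081/udp   # Geneve overlay traffic (the tunnel mode used later in this tutorial)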

Step 2: Installing the dependencies 

Note: The administrative machine for this tutorial is based on Ubuntu 20.04.6

Docker

Install Docker:

sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
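
Optionally, allow your user to run Docker without sudo and verify the installation; the group change takes effect after you log out and back in:

sudo usermod -aG docker $USER
docker run --rm hello-world   # prints a greeting if the install works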

eksctl

A command-line tool for working with EKS clusters that automates many individual tasks. For more information, see Installing or updating eksctl in the Amazon EKS user guide.

curl "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    --silent --location \
    | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/

eksctl-anywhere

This will let you create a cluster on any of the supported providers for local development or production workloads.

export EKSA_RELEASE="0.14.3" OS="$(uname -s | tr A-Z a-z)" RELEASE_NUMBER=30
curl "https://anywhere-assets.eks.amazonaws.com/releases/eks-a/${RELEASE_NUMBER}/artifacts/eks-a/v${EKSA_RELEASE}/${OS}/amd64/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
    --silent --location \
    | tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/

kubectl

A command-line tool for working with Kubernetes clusters. For more information, see Installing or updating kubectl in the Amazon EKS user guide.

export OS="$(uname -s | tr A-Z a-z)" ARCH=$(test "$(uname -m)" = 'x86_64' && echo 'amd64' || echo 'arm64')
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${OS}/${ARCH}/kubectl"
sudo mv ./kubectl /usr/local/bin
sudo chmod +x /usr/local/bin/kubectl

AWS CLI

A command-line tool for working with AWS services, including Amazon EKS. See Installing, updating, and uninstalling the AWS CLI in the AWS Command Line Interface User Guide 

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install

Helm

Helm helps you manage Kubernetes applications. Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Make sure you have Helm 3 installed.

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
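
With all dependencies in place, a quick sanity check confirms each binary is on the PATH:

docker --version
eksctl version
eksctl anywhere version
kubectl version --client
aws --version
helm version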

Step 3: Cluster creation on EKS-A

The command below creates a file named $CLUSTER_NAME.yaml in the path where it is executed. The configuration specification is divided into two sections:

  • Cluster
  • DockerDatacenterConfig

The provider type chosen for this tutorial is docker, which is a development-only option and not meant for production. You can choose from a list of providers and modify the commands accordingly.

CLUSTER_NAME=mgmt
eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider docker > $CLUSTER_NAME.yaml

Note: A sample $CLUSTER_NAME.yaml file:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt
spec:
  clusterNetwork:
    cniConfig:
      cilium: {}
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    services:
      cidrBlocks:
      - 10.96.0.0/12
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: mgmt
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.25"
  managementCluster:
    name: mgmt
  workerNodeGroupConfigurations:
  - count: 2
    name: md-0

---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: mgmt
spec: {}

---

Create a cluster using the $CLUSTER_NAME.yaml file from above

eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
Performing setup and validations
✅ validation succeeded {"validation": "docker Provider setup is valid"}
Creating new bootstrap cluster
Installing cluster-api providers on bootstrap cluster
Provider specific setup
Creating new workload cluster
Installing networking on workload cluster
Installing cluster-api providers on workload cluster
Moving cluster management from bootstrap to workload cluster
Installing EKS-A custom components (CRD and controller) on workload cluster
Creating EKS-A CRDs instances on workload cluster
Installing GitOps Toolkit on workload cluster
GitOps field not specified, bootstrap flux skipped
Deleting bootstrap cluster
🎉 Cluster created!
----------------------------------------------------------------------------------
The Amazon EKS Anywhere Curated Packages are only available to customers with the
Amazon EKS Anywhere Enterprise Subscription
----------------------------------------------------------------------------------

Step 4: Accessing the cluster

Once the cluster is created, access it with the generated KUBECONFIG file in your local directory.

export KUBECONFIG=/home/ubuntu/mgmt/mgmt-eks-a-cluster.kubeconfig
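
A quick check confirms that the kubeconfig works and the cluster is reachable:

kubectl get nodes
kubectl get pods -A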

Step 5: Validating the Default Cilium version

As outlined in the features section, EKS-A ships by default with Cilium as the CNI, and the image tag is suffixed with -eksa.

kubectl -n kube-system exec ds/cilium -- cilium version
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Client: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64
Daemon: 1.12.11-eksa.1 a71bd065 2023-06-20T22:57:43+00:00 go version go1.18.10 linux/arm64

Step 6: Deploying a test workload (Optional)

EKS-A with the eksa images and default Cilium has a limited set of features. You can create EKS-A test workloads and then check out some basic connectivity and network policy tests, as sketched below. The AWS examples in the documentation clearly explain how to get started. But as highlighted earlier, the default Cilium version that comes with EKS-Anywhere is limited. Let’s install the fully featured Cilium and review some of the additional features that come with it.
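
A minimal sketch of such a test workload (the names and images are illustrative): deploy a small web server, expose it as a service, and curl it from a one-shot client pod:

kubectl create deployment hello --image=nginx --replicas=2
kubectl expose deployment hello --port=80
# One-shot client pod that curls the service by its DNS name
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
    curl -s http://hello.default.svc.cluster.local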

Step 7: Upgrade to Cilium OSS

Many advanced features of Cilium are not yet enabled as part of EKS Anywhere, including Hubble observability, DNS-aware and HTTP-aware network policy, multi-cluster routing, transparent encryption, and advanced load-balancing. You will now upgrade the EKS-A cluster from the default image to Cilium OSS.
Note: You can also upgrade to Isovalent Enterprise for Cilium, the enterprise-grade, hardened distribution that addresses complex security automation, role-based access control, and integration workflows with legacy infrastructure. Contact our sales team at sales@isovalent.com to get started with a demo and the next steps.

Install Cilium CLI

The Cilium CLI can be used to install Cilium, inspect the state of a Cilium installation, and enable/disable various features (e.g. clustermesh, Hubble).

You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine or server.

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Install Hubble CLI

In order to access the observability data collected by Hubble, install the Hubble CLI. You can install the Hubble CLI for Linux, macOS, or other distributions on your local machine or server.

export HUBBLE_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/hubble/master/stable.txt)
HUBBLE_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then HUBBLE_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/hubble/releases/download/$HUBBLE_VERSION/hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
sha256sum --check hubble-linux-${HUBBLE_ARCH}.tar.gz.sha256sum
sudo tar xzvfC hubble-linux-${HUBBLE_ARCH}.tar.gz /usr/local/bin
rm hubble-linux-${HUBBLE_ARCH}.tar.gz{,.sha256sum}
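
Verify that both CLIs are installed:

cilium version
hubble version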

Install Cilium & Hubble

Set up Helm repository:

helm repo add cilium https://helm.cilium.io/

Deploy Cilium using helm:

helm install cilium cilium/cilium --version 1.14.0 \
  --namespace kube-system \
  --set eni.enabled=false \
  --set ipam.mode=kubernetes \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=geneve \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true 

What do these values signify?

Flag | Description
eni.enabled=false | We are not using the native AWS ENI datapath.
ipam.mode=kubernetes | The Kubernetes host-scope IPAM mode is enabled and delegates address allocation to each individual node in the cluster.
egressMasqueradeInterfaces=eth0 | Limits the network interface on which masquerading is performed.
tunnel=geneve | The encapsulation configuration for communication between nodes (vxlan is the alternative).
hubble.metrics.enabled | The set of Hubble metrics to enable.
hubble.relay.enabled=true | Enables the Hubble Relay service.
hubble.ui.enabled=true | Enables the graphical service map.

Note: The Cilium installation might not go through cleanly on the first attempt; you will have to delete a few service accounts, secrets, ClusterRoles, and ClusterRoleBindings left over from the embedded installation, as shown below. This will be fixed in an upcoming release.

kubectl delete serviceaccount cilium --namespace kube-system
kubectl delete serviceaccount cilium-operator --namespace kube-system
kubectl delete secret hubble-ca-secret --namespace kube-system
kubectl delete secret hubble-server-certs --namespace kube-system
kubectl delete configmap cilium-config --namespace kube-system
kubectl delete clusterrole cilium
kubectl delete clusterrolebinding cilium
kubectl delete clusterrolebinding cilium-operator
kubectl delete secret cilium-ca --namespace kube-system
kubectl delete service hubble-peer --namespace kube-system
kubectl delete daemonset cilium --namespace kube-system
kubectl delete deployment cilium-operator --namespace kube-system
kubectl delete clusterrole cilium-operator
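
Once the conflicting objects are removed, re-run the Helm installation; helm upgrade --install is an idempotent way to retry with the same values:

helm upgrade --install cilium cilium/cilium --version 1.14.0 \
  --namespace kube-system \
  --set eni.enabled=false \
  --set ipam.mode=kubernetes \
  --set egressMasqueradeInterfaces=eth0 \
  --set tunnel=geneve \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}" \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true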

Validate the installation

To validate that Cilium has been properly installed with the correct version, run cilium status and observe that Cilium is managing all the pods and that they are in the “Ready” state and “Available”.

cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       OK
    \__/       ClusterMesh:        disabled

Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet         cilium             Desired: 2, Ready: 2/2, Available: 2/2
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:       hubble-relay       Running: 1
                  cilium             Running: 2
                  hubble-ui          Running: 1
                  cilium-operator    Running: 2
Cluster Pods:     20/20 managed by Cilium
Image versions    hubble-relay       quay.io/cilium/hubble-relay:v1.14.0@sha256:da96840b638d3e9705cfc48af2bddfe92d17eb4f5a776b075bef9ac50efbb042: 1
                  cilium             quay.io/cilium/cilium:v1.14.0@sha256:994b8b3b26d8a1ef74b51a163daa1ac02aceb9b16f794f8120f15a12011739dc: 2
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.11.0@sha256:14c04d11f78da5c363f88592abae8d2ecee3cbe009f443ef11df6ac5f692d839: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.11.0@sha256:bcb369c47cada2d4257d63d3749f7f87c91dde32e010b223597306de95d1ecc8: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.14.0@sha256:753c1d0549032da83ec45333feec6f4b283331618a1f7fed2f7e2d36efbd4bc9: 2

Cluster and Cilium Health Check

Check the status of the nodes and make sure they are in a “Ready” state

kubectl get nodes -o wide
NAME                                      STATUS   ROLES           AGE     VERSION               INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION     CONTAINER-RUNTIME
mgmt-j5dbj                   Ready    control-plane   6h12m   v1.27.4-eks-cedffd4   172.18.0.5    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.6.19
mgmt-md-0-68884d88b9-vc8rb   Ready    <none>          6h12m   v1.27.4-eks-cedffd4   172.18.0.6    <none>        Amazon Linux 2023   5.15.49-linuxkit   containerd://1.7.2

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity.

kubectl -n kube-system exec ds/cilium -- cilium-health status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2023-04-04T04:33:54Z
Nodes:
  mgmt-j5dbj (localhost):
    Host connectivity to 172.18.0.5:
      ICMP to stack:   OK, RTT=848.391µs
      HTTP to agent:   OK, RTT=180.512µs
    Endpoint connectivity to 192.168.0.114:
      ICMP to stack:   OK, RTT=917.135µs
      HTTP to agent:   OK, RTT=282.271µs
  mgmt-md-0-68884d88b9-vc8rb:
    Host connectivity to 172.18.0.6:
      ICMP to stack:   OK, RTT=896.943µs
      HTTP to agent:   OK, RTT=347.153µs
    Endpoint connectivity to 192.168.1.234:
      ICMP to stack:   OK, RTT=889.663µs
      HTTP to agent:   OK, RTT=590.454µs

Cilium Connectivity Test

The cilium connectivity test command deploys a series of services, deployments, and CiliumNetworkPolicies that use various connectivity paths to connect to each other. Connectivity paths include those with and without service load-balancing and various network policy combinations.

Run the test (output truncated):

cilium connectivity test

ℹ️  Monitor aggregation detected, will skip some flow validation steps
[cilium-eksa] Creating namespace cilium-test for connectivity check...
[cilium-eksa] Deploying echo-same-node service...
[cilium-eksa] Deploying DNS test server configmap...
[cilium-eksa] Deploying same-node deployment...
[cilium-eksa] Deploying client deployment...
[cilium-eksa] Deploying client2 deployment...
🔭 Enabling Hubble telescope...
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.14.0

✅ All 42 tests (191 actions) successful, 12 tests skipped, 1 scenarios skipped.
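
The test workloads live in the cilium-test namespace created above; once the run is green, they can be removed:

kubectl delete namespace cilium-test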

Validate Hubble API access

To access the Hubble API, create a port forward to the Hubble service from your local machine or server. This will allow you to connect the Hubble client to the local port 4245 and access the Hubble Relay service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.

kubectl port-forward -n kube-system svc/hubble-relay 4245:80

Validate that you have access to the Hubble API via the installed CLI, and notice that both nodes are connected and flows are being counted.

hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 38.51
Connected Nodes: 2/2

Run the hubble observe command in a different terminal against the local port to observe cluster-wide network events through Hubble Relay. In this case, a client app sends a wget request to a server every few seconds, and that transaction can be seen below.

hubble observe --server localhost:4245 --follow

Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: default/client:35552 (ID:458) -> default/server:80 (ID:2562) to-stack FORWARDED (TCP Flags: SYN)
Sep  7 09:11:51.915: 192.168.2.200 (ID:458) -> 192.168.1.12 (remote-node) to-overlay FORWARDED (IPv4)
Sep  7 09:11:51.915: 192.168.0.33 (ID:64881) -> 192.168.2.200 (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.916: 192.168.1.12 (ID:2562) -> 192.168.2.200 (host) to-stack FORWARDED (IPv4)
Sep  7 09:11:51.917: 192.168.2.200 (ID:458) -> 192.168.1.12 (host) to-stack FORWARDED (IPv4)
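
The client and server identities in the flows above come from a simple demo pair. A minimal sketch of such a workload (names and images are illustrative):

# Server: plain nginx behind a ClusterIP service named "server"
kubectl create deployment server --image=nginx
kubectl expose deployment server --port=80
# Client: busybox looping a wget against the server service every couple of seconds
kubectl create deployment client --image=busybox -- \
    /bin/sh -c 'while true; do wget -q -O- http://server; sleep 2; done'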

Accessing the Hubble UI

In order to access the Hubble UI, create a port forward to the Hubble service from your local machine or server. This will allow you to connect to the local port 12000 and access the Hubble UI service in your Kubernetes cluster. For more information on this method, see Use Port Forwarding to Access Application in a Cluster.

kubectl port-forward -n kube-system svc/hubble-ui 12000:80

  • Open http://localhost:12000 in your browser.
  • You should see a screen with an invitation to select a namespace; use the namespace selector dropdown in the top left corner to select one.
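
Alternatively, the Cilium CLI can set up the port forward and open the UI in your default browser in one step:

cilium hubble ui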

Conclusion

Hopefully, this post gave you a good overview of how to install Cilium in EKS-Anywhere. In Part II of this blog series, we will discuss more of the features you can enable with Cilium. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
