
EKS & Isovalent Enterprise for Cilium – Reducing Operational Complexity


Kubernetes doesn’t provide a network interface system by default; network plugins provide this functionality. AWS documents several alternative CNI plugins, with Cilium and Isovalent Enterprise for Cilium being popular choices. Until recently, EKS clusters would boot with the default networking add-ons, and users had to take additional steps to install these alternative plugins. With the recently announced ability to create an EKS cluster without the default networking add-on (AWS VPC CNI), you can install CNI plugins like Isovalent Enterprise for Cilium with minimal effort. This article shows how to deploy an EKS cluster without a preinstalled CNI plugin and then add Isovalent Enterprise for Cilium as the CNI plugin.

Why should I disable the add-ons?

Every EKS cluster automatically comes with default networking add-ons (Amazon VPC CNI, CoreDNS, and Kube-Proxy) that provide critical functionality for pod and service operations. Many EKS users choose Cilium or Isovalent Enterprise for Cilium to replace Kube-Proxy and use the full power of eBPF. However, installing Cilium or any third-party networking plugin previously required you to add taints to your node groups or patch the aws-node DaemonSet (see below).

As you will see in the tutorial below, disabling the add-ons at cluster creation greatly simplifies the installation and operation of Cilium and Isovalent Enterprise for Cilium.

kubectl -n kube-system patch daemonset aws-node --type='strategic' -p='{"spec":{"template":{"spec":{"nodeSelector":{"io.cilium/aws-node-enabled":"true"}}}}}'
taints:
   - key: "node.cilium.io/agent-not-ready"
     value: "true"
     effect: "NoExecute"

What is Isovalent Enterprise for Cilium?

Isovalent Enterprise for Cilium is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the creators of Cilium. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium on EKS?

For enterprise customers requiring support and usage of advanced networking, security, and observability features, “Isovalent Enterprise for Cilium” is recommended with the following benefits:

  • Advanced network policy: Isovalent Enterprise for Cilium provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy (see the illustrative policy after this list). These capabilities enable fine-grained control over network traffic for micro-segmentation and improved security.
  • Hubble flow observability + User Interface: Hubble observability feature provides real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: Isovalent Enterprise for Cilium provides seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Enterprise for Cilium provides sidecar-free, seamless service-to-service communication and advanced load balancing, making it easy to deploy and manage complex microservices architectures.
  • Enterprise-grade support: Isovalent Enterprise for Cilium includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.
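
As an illustration of the DNS-aware policy capability mentioned above, a CiliumNetworkPolicy along the following lines restricts a workload's egress to a single FQDN over HTTPS, while still allowing the DNS lookups Cilium needs to resolve and track it. This is a minimal sketch; the app label and FQDN are hypothetical and would need to match your own workloads.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-api-example
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
    # Allow DNS lookups via kube-dns so Cilium can resolve and track the FQDN
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to the resolved FQDN
    - toFQDNs:
        - matchName: "api.example.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP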

Prerequisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

  • Access to AWS. Create a new account for free.
  • The Cilium operator requires EC2 privileges to perform ENI creation and IP allocation.
  • Ensure you have a cluster IAM role if you’re going to create your cluster with eksctl.
  • If your environment requires firewall rules to enable connectivity, add the respective rules so that Cilium works properly.
  • Install kubectl
  • Install Helm
  • Install eksctl (make sure the version is 0.186.0 or higher)
  • Install awscli
  • Cilium CLI (optional): Cilium Enterprise provides a Cilium CLI tool that automatically collects all the logs and debug information needed to troubleshoot your Cilium Enterprise installation. You can install the Cilium CLI for Linux, macOS, or other distributions on your local machine(s) or server(s).

How can I disable the add-ons?

Every EKS cluster automatically comes with default networking add-ons (see below) that provide critical functionality for pod and service operations on EKS clusters.

  • Amazon VPC CNI
  • CoreDNS, and
  • Kube-Proxy

You now have the option to create an EKS cluster that skips these add-ons during cluster creation.

This tutorial uses eksctl to create the EKS cluster without these add-ons, then installs Isovalent Enterprise for Cilium, and finally adds a managed node group.

Points to consider

You might wonder: why can’t you create an EKS cluster with a managed node group in one go? As you can see below, if you create an EKS cluster without the add-ons and try to add a managed node group during cluster creation, that is not supported, because the cluster is being created without the AWS VPC CNI.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster1
  region: ap-south-1
  version: "1.29"

iam:
  withOIDC: true

addonsConfig:
  disableDefaultAddons: true
addons:
  - name: coredns

managedNodeGroups:
  - name: byocni
    instanceType: t2.medium
    desiredCapacity: 2
    privateNetworking: true
eksctl create cluster -f ./default-ipv4-no-addons-managed.yaml
Error: fields nodeGroups, managedNodeGroups, fargateProfiles, karpenter, gitops, iam.serviceAccounts, and iam.podIdentityAssociations are not supported during cluster creation in a cluster without VPC CNI; please remove these fields and add them back after cluster creation is successful

How do you create an EKS cluster (with no add-ons)?

  • As listed in the prerequisites section, ensure that the eksctl version is 0.186.0 or higher.
eksctl version
0.187.0
  • Create a ClusterConfig file to create an EKS cluster (e.g., default-ipv4-no-addons.yaml)
    • For this tutorial, we will be disabling
      • Kube-Proxy
      • AWS-VPC-CNI
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster1
  region: ap-south-1
  version: "1.29"

iam:
  withOIDC: true

addonsConfig:
  disableDefaultAddons: true
addons:
  - name: coredns
  • Create the cluster
eksctl create cluster -f ./default-ipv4-no-addons.yaml
  • Check the status of the Pods.
    • The cluster runs only CoreDNS. Its Pods remain Pending, which is expected, as no CNI is running on the cluster yet.
kubectl get pods -A -o wide

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP                NODE                                             NOMINATED NODE   READINESS GATES
kube-system   coredns-5d5f56f475-2qn7m           1/1     Pending   0          4d3h   192.168.147.55    ip-192-168-132-209.ap-south-1.compute.internal   <none>           <none>
kube-system   coredns-5d5f56f475-qq6sf           1/1     Pending   0          4d3h   192.168.150.163   ip-192-168-132-209.ap-south-1.compute.internal   <none>           <none>

How can I deploy Isovalent Enterprise for Cilium as the CNI on the EKS cluster?

  • With the EKS cluster up and running, you can install Isovalent Enterprise for Cilium; this tutorial uses Helm.
  • Once the Helm details have been obtained, keep the EKS cluster’s Kubernetes API server URL and port handy. Specifying these is necessary because the EKS cluster was created without Kube-Proxy, so no component currently performs the cluster’s internal L4 load balancing.
    • These details can be retrieved via kubectl.
kubectl cluster-info

Kubernetes control plane is running at https://##############################.gr7.ap-south-1.eks.amazonaws.com
CoreDNS is running at https://###############################.gr7.ap-south-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
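Alternatively, the API server endpoint can also be retrieved with the AWS CLI, for example (assuming the cluster name and region used throughout this tutorial):

aws eks describe-cluster --name cluster1 --region ap-south-1 --query "cluster.endpoint" --output text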
  • The Cilium agent needs to be made aware of this information with the following configuration values when installing Cilium:
eni.enabled=true
ipam.mode=eni
egressMasqueradeInterfaces=eth0
routingMode=native
k8sServiceHost=######################.gr7.ap-south-1.eks.amazonaws.com
k8sServicePort=443 
kubeProxyReplacement=true
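Put together, a Helm installation could look roughly like the following. This is a sketch only: the chart reference is a placeholder, since access to the Isovalent Enterprise for Cilium Helm charts is provided by Isovalent, and the API server host is masked as in the kubectl cluster-info output above.

helm install cilium <isovalent-enterprise-cilium-chart> \
  --namespace kube-system \
  --set eni.enabled=true \
  --set ipam.mode=eni \
  --set egressMasqueradeInterfaces=eth0 \
  --set routingMode=native \
  --set k8sServiceHost=######################.gr7.ap-south-1.eks.amazonaws.com \
  --set k8sServicePort=443 \
  --set kubeProxyReplacement=true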
  • Check the status of the Pods.
    • Notice that no Kube-Proxy Pods are created.
kubectl get pods -A -o wide

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
kube-system   cilium-operator-5879bbf5fd-bczbt   0/1     Pending   0          117s    <none>   <none>   <none>           <none>
kube-system   cilium-operator-5879bbf5fd-gxk75   0/1     Pending   0          117s    <none>   <none>   <none>           <none>
kube-system   coredns-5d5f56f475-j8mzc           0/1     Pending   0          5m46s   <none>   <none>   <none>           <none>
kube-system   coredns-5d5f56f475-z596q           0/1     Pending   0          5m46s   <none>   <none>   <none>           <none>
  • For this tutorial, we add a node group after the EKS cluster has been created.

Note: You no longer have to add any taints for Cilium, which were previously required to ensure application Pods are only scheduled once Cilium is ready to manage them.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster1
  region: ap-south-1

managedNodeGroups:
  - name: byocni
    instanceType: t2.medium
    desiredCapacity: 2
    privateNetworking: true
  • Create the node group.
eksctl create nodegroup -f ./nodegroup.yaml
  • Check the status of the nodes and make sure they are in a “Ready” state.
kubectl get nodes -o wide

NAME                                             STATUS   ROLES    AGE     VERSION               INTERNAL-IP       EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-192-168-132-209.ap-south-1.compute.internal   Ready    <none>   4d20h   v1.29.3-eks-ae9a62a   192.168.132.209   <none>        Amazon Linux 2   5.10.219-208.866.amzn2.x86_64   containerd://1.7.11
ip-192-168-185-135.ap-south-1.compute.internal   Ready    <none>   4d20h   v1.29.3-eks-ae9a62a   192.168.185.135   <none>        Amazon Linux 2   5.10.219-208.866.amzn2.x86_64   containerd://1.7.11
  • Check the status of the Pods.
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE   IP                NODE NOMINATED NODE   READINESS GATES
kube-system   cilium-9htmt                       1/1     Running   0          47s   192.168.168.146   ip-192-168-168-146.ap-south-1.compute.internal   <none>           <none>
kube-system   cilium-f67v4                       1/1     Running   0          10m   192.168.126.79    ip-192-168-126-79.ap-south-1.compute.internal    <none>           <none>
kube-system   cilium-operator-5879bbf5fd-gxk75   1/1     Running   0          13m   192.168.126.79    ip-192-168-126-79.ap-south-1.compute.internal    <none>           <none>
kube-system   cilium-operator-5879bbf5fd-h825g   1/1     Running   0          20s   192.168.168.146   ip-192-168-168-146.ap-south-1.compute.internal   <none>           <none>
kube-system   coredns-5d5f56f475-j8mzc           1/1     Running   0          17m   192.168.117.78    ip-192-168-126-79.ap-south-1.compute.internal    <none>           <none>
kube-system   coredns-5d5f56f475-z596q           1/1     Running   0          17m   192.168.119.181   ip-192-168-126-79.ap-south-1.compute.internal    <none>           <none>
  • Verify that no AWS DaemonSets are running on the EKS cluster; only the Cilium DaemonSet should be present.
kubectl get ds -A

NAMESPACE     NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                AGE
kube-system   cilium                  2         2         2       2            2           kubernetes.io/os=linux       25m
  • Validate the Cilium version and agent status.
kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29+ (v1.29.6-eks-db838b0) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    True   [eth0   192.168.132.209 fe80::31:acff:fec6:506d (Direct Routing), eth1   fe80::af:11ff:fe04:b5b 192.168.142.32, eth2   fe80::a0:54ff:fe88:43ed 192.168.147.201]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
CNI Config file:         successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                  Ok   1.15.6-cee.1 (v1.15.6-cee.1-ec1edf7a)
NodeMonitor:             Listening for events on 15 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 6/14 allocated,
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       41/41 healthy
Proxy Status:            OK, ip 192.168.153.181, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 18.08   Metrics: Disabled
Encryption:              Disabled
Cluster health:          2/2 reachable   (2024-07-18T10:52:53Z)
Modules Health:          Stopped(0) Degraded(0) OK(11)
  • Validate health check
    • cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking and connectivity. You can check node-to-node health with cilium-health status:
kubectl -n kube-system exec ds/cilium -- cilium-health status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2024-07-23T07:08:00Z
Nodes:
  ip-192-168-132-209.ap-south-1.compute.internal (localhost):
    Host connectivity to 192.168.132.209:
      ICMP to stack:   OK, RTT=327.826µs
      HTTP to agent:   OK, RTT=184.132µs
    Endpoint connectivity to 192.168.152.161:
      ICMP to stack:   OK, RTT=335.942µs
      HTTP to agent:   OK, RTT=369.218µs
  ip-192-168-185-135.ap-south-1.compute.internal:
    Host connectivity to 192.168.185.135:
      ICMP to stack:   OK, RTT=1.168448ms
      HTTP to agent:   OK, RTT=962.031µs
    Endpoint connectivity to 192.168.182.152:
      ICMP to stack:   OK, RTT=1.164016ms
      HTTP to agent:   OK, RTT=1.313351ms
  • Cilium Connectivity Test (Optional)
    • The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicies, and uses various connectivity paths to connect to them. Connectivity paths include with and without service load-balancing and various network policy combinations.
cilium connectivity test

ℹ️  Monitor aggregation detected, will skip some flow validation steps
[cluster1.ap-south-1.eksctl.io] Creating namespace cilium-test for connectivity check...
[cluster1.ap-south-1.eksctl.io] Deploying echo-same-node service...
[cluster1.ap-south-1.eksctl.io] Deploying DNS test server configmap...
[cluster1.ap-south-1.eksctl.io] Deploying same-node deployment...
[cluster1.ap-south-1.eksctl.io] Deploying client deployment...
[cluster1.ap-south-1.eksctl.io] Deploying client2 deployment...
[cluster1.ap-south-1.eksctl.io] Deploying client3 deployment...
[cluster1.ap-south-1.eksctl.io] Deploying echo-other-node service...
[cluster1.ap-south-1.eksctl.io] Deploying other-node deployment...
[host-netns] Deploying cluster1.ap-south-1.eksctl.io daemonset...
[host-netns-non-cilium] Deploying cluster1.ap-south-1.eksctl.io daemonset...
ℹ️  Skipping tests that require a node Without Cilium
[cluster1.ap-south-1.eksctl.io] Waiting for deployment cilium-test/client to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for deployment cilium-test/client2 to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for deployment cilium-test/echo-same-node to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for deployment cilium-test/client3 to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for deployment cilium-test/echo-other-node to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client-d48766cfd-v69r6 to reach DNS server on cilium-test/echo-same-node-7f896b84-6h7hv pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client2-6b89df6c77-chd7n to reach DNS server on cilium-test/echo-same-node-7f896b84-6h7hv pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client3-7f986c467b-z4n5d to reach DNS server on cilium-test/echo-same-node-7f896b84-6h7hv pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client-d48766cfd-v69r6 to reach DNS server on cilium-test/echo-other-node-58999bbffd-mz5f8 pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client2-6b89df6c77-chd7n to reach DNS server on cilium-test/echo-other-node-58999bbffd-mz5f8 pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client3-7f986c467b-z4n5d to reach DNS server on cilium-test/echo-other-node-58999bbffd-mz5f8 pod...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client-d48766cfd-v69r6 to reach default/kubernetes service...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client2-6b89df6c77-chd7n to reach default/kubernetes service...
[cluster1.ap-south-1.eksctl.io] Waiting for pod cilium-test/client3-7f986c467b-z4n5d to reach default/kubernetes service...
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-other-node to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-cxpgr
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-other-node to be synchronized by Cilium pod kube-system/cilium-zq2vs
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-same-node to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-cxpgr
[cluster1.ap-south-1.eksctl.io] Waiting for Service cilium-test/echo-same-node to be synchronized by Cilium pod kube-system/cilium-zq2vs
[cluster1.ap-south-1.eksctl.io] Waiting for NodePort 192.168.185.135:30304 (cilium-test/echo-other-node) to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for NodePort 192.168.185.135:30495 (cilium-test/echo-same-node) to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for NodePort 192.168.132.209:30495 (cilium-test/echo-same-node) to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for NodePort 192.168.132.209:30304 (cilium-test/echo-other-node) to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for DaemonSet cilium-test/host-netns-non-cilium to become ready...
[cluster1.ap-south-1.eksctl.io] Waiting for DaemonSet cilium-test/host-netns to become ready...
ℹ️  Skipping IPCache check
🔭 Enabling Hubble telescope...
⚠️  Unable to contact Hubble Relay, disabling Hubble telescope and flow validation: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:4245: connect: connection refused"
ℹ️  Expose Relay locally with:
   cilium hubble enable
   cilium hubble port-forward&
ℹ️  Cilium version: 1.15.6
🏃[cilium-test] Running 82 tests ...
[=] [cilium-test] Test [no-unexpected-packet-drops] [1/82]
..
[=] [cilium-test] Test [no-policies] [2/82]
.................................................
[=] [cilium-test] Test [no-policies-extra] [4/82]
............
[=] [cilium-test] Skipping test [no-policies-from-outside] [3/82] (skipped by condition)
[=] [cilium-test] Test [allow-all-except-world] [5/82]
........................
[=] [cilium-test] Test [client-ingress] [6/82]
......
[=] [cilium-test] Test [client-ingress-knp] [7/82]
......
[=] [cilium-test] Test [allow-all-with-metrics-check] [8/82]
......
[=] [cilium-test] Test [all-ingress-deny] [9/82]
............
[=] [cilium-test] Skipping test [all-ingress-deny-from-outside] [10/82] (skipped by condition)
[=] [cilium-test] Test [all-ingress-deny-knp] [11/82]
............
[=] [cilium-test] Test [all-egress-deny] [12/82]
........................
[=] [cilium-test] Test [all-egress-deny-knp] [13/82]
........................
[=] [cilium-test] Test [all-entities-deny] [14/82]
............
[=] [cilium-test] Test [cluster-entity] [15/82]
...
[=] [cilium-test] Skipping test [cluster-entity-multi-cluster] [16/82] (skipped by condition)
[=] [cilium-test] Test [host-entity-egress] [17/82]
......
[=] [cilium-test] Test [host-entity-ingress] [18/82]
....
[=] [cilium-test] Test [echo-ingress] [19/82]
......
[=] [cilium-test] Skipping test [echo-ingress-from-outside] [20/82] (skipped by condition)
[=] [cilium-test] Test [echo-ingress-knp] [21/82]
......
[=] [cilium-test] Test [client-ingress-icmp] [22/82]
......
[=] [cilium-test] Test [client-egress] [23/82]
......
[=] [cilium-test] Test [client-egress-knp] [24/82]
......
[=] [cilium-test] Test [client-egress-expression] [25/82]
......
[=] [cilium-test] Test [client-egress-expression-knp] [26/82]
......
[=] [cilium-test] Test [client-with-service-account-egress-to-echo] [27/82]
......
[=] [cilium-test] Test [client-egress-to-echo-service-account] [28/82]
......
[=] [cilium-test] Test [to-entities-world] [29/82]
.........
[=] [cilium-test] Test [to-cidr-external] [30/82]
......
[=] [cilium-test] Test [to-cidr-external-knp] [31/82]
......
[=] [cilium-test] Skipping test [from-cidr-host-netns] [32/82] (skipped by condition)
[=] [cilium-test] Test [echo-ingress-from-other-client-deny] [33/82]
..........
[=] [cilium-test] Test [client-ingress-from-other-client-icmp-deny] [34/82]
............
[=] [cilium-test] Test [client-egress-to-echo-deny] [35/82]
............
[=] [cilium-test] Test [client-ingress-to-echo-named-port-deny] [36/82]
....
[=] [cilium-test] Test [client-egress-to-echo-expression-deny] [37/82]
....
[=] [cilium-test] Test [client-with-service-account-egress-to-echo-deny] [38/82]
....
[=] [cilium-test] Test [client-egress-to-echo-service-account-deny] [39/82]
..
[=] [cilium-test] Test [client-egress-to-cidr-deny] [40/82]
......
[=] [cilium-test] Test [client-egress-to-cidr-deny-default] [41/82]
......
[=] [cilium-test] Skipping test [clustermesh-endpointslice-sync] [42/82] (skipped by condition)
[=] [cilium-test] Test [health] [43/82]
..
[=] [cilium-test] Skipping test [north-south-loadbalancing] [44/82] (Feature node-without-cilium is disabled)
[=] [cilium-test] Test [pod-to-pod-encryption] [45/82]
.
[=] [cilium-test] Skipping test [pod-to-pod-with-l7-policy-encryption] [46/82] (requires Feature encryption-pod mode wireguard, got disabled)
[=] [cilium-test] Test [node-to-node-encryption] [47/82]
...
[=] [cilium-test] Skipping test [egress-gateway] [48/82] (skipped by condition)
[=] [cilium-test] Skipping test [egress-gateway-excluded-cidrs] [49/82] (Feature enable-ipv4-egress-gateway is disabled)
[=] [cilium-test] Skipping test [egress-gateway-with-l7-policy] [50/82] (skipped by condition)
[=] [cilium-test] Skipping test [pod-to-node-cidrpolicy] [51/82] (Feature cidr-match-nodes is disabled)
[=] [cilium-test] Skipping test [north-south-loadbalancing-with-l7-policy] [52/82] (Feature node-without-cilium is disabled)
[=] [cilium-test] Test [echo-ingress-l7] [53/82]
..................
[=] [cilium-test] Test [echo-ingress-l7-named-port] [54/82]
..................
[=] [cilium-test] Test [client-egress-l7-method] [55/82]
..................
[=] [cilium-test] Test [client-egress-l7] [56/82]
...............
[=] [cilium-test] Test [client-egress-l7-named-port] [57/82]
...............
[=] [cilium-test] Skipping test [client-egress-l7-tls-deny-without-headers] [58/82] (Feature secret-backend-k8s is disabled)
[=] [cilium-test] Skipping test [client-egress-l7-tls-headers] [59/82] (Feature secret-backend-k8s is disabled)
[=] [cilium-test] Test [dns-only] [72/82]
...............
[=] [cilium-test] Skipping test [pod-to-ingress-service-deny-ingress-identity] [65/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [client-egress-l7-set-header] [60/82] (Feature secret-backend-k8s is disabled)
[=] [cilium-test] Skipping test [echo-ingress-auth-always-fail] [61/82] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test] Skipping test [echo-ingress-mutual-auth-spiffe] [62/82] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test] Skipping test [pod-to-ingress-service] [63/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [pod-to-ingress-service-deny-all] [64/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [outside-to-ingress-service] [68/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [pod-to-ingress-service-deny-backend-service] [66/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [pod-to-ingress-service-allow-ingress-identity] [67/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [outside-to-ingress-service-deny-cidr] [70/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [outside-to-ingress-service-deny-world-identity] [69/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Skipping test [outside-to-ingress-service-deny-all-ingress] [71/82] (Feature ingress-controller is disabled)
[=] [cilium-test] Test [to-fqdns] [73/82]
............
[=] [cilium-test] Skipping test [pod-to-controlplane-host] [74/82] (skipped by condition)
[=] [cilium-test] Skipping test [pod-to-k8s-on-controlplane] [75/82] (skipped by condition)
[=] [cilium-test] Skipping test [pod-to-controlplane-host-cidr] [76/82] (skipped by condition)
[=] [cilium-test] Skipping test [pod-to-k8s-on-controlplane-cidr] [77/82] (skipped by condition)
[=] [cilium-test] Skipping test [local-redirect-policy] [78/82] (Feature enable-local-redirect-policy is disabled)
[=] [cilium-test] Test [pod-to-pod-no-frag] [79/82]
.
[=] [cilium-test] Test [check-log-errors] [82/82]
................
[cilium-test] All 48 tests (471 actions) successful, 34 tests skipped, 0 scenarios skipped.

[=] [cilium-test] Skipping test [host-firewall-ingress] [80/82] (skipped by condition)
[=] [cilium-test] Skipping test [host-firewall-egress] [81/82] (skipped by condition)

Conclusion

Hopefully, this post gave you a good overview of deploying an EKS cluster without a preinstalled CNI plugin and then adding Isovalent Enterprise for Cilium as the CNI plugin.

You can schedule a demo with our experts if you’d like to learn more.

Try it out

Choose your way to explore Cilium with our Cilium learning tracks, which focus on features important to engineers using Cilium in cloud environments. Cilium comes in different flavors, and whether you use GKE, AKS, or EKS, not all of these features will apply to every managed Kubernetes service. However, the tracks should give you a good idea of the features relevant to operating Cilium in cloud environments.

Suggested Reading

  • Isovalent Enterprise for Cilium on EKS & EKS-A in AWS Marketplace – Isovalent Enterprise for Cilium is now available in the AWS Marketplace.
  • Enabling Enterprise Features for Cilium in Elastic Kubernetes Service (EKS) – In this tutorial, you will learn how to enable Enterprise features in an Elastic Kubernetes Service (EKS) cluster running Isovalent Enterprise for Cilium.
  • Cilium in EKS-Anywhere – This tutorial takes a deep dive into bringing up an EKS-A cluster and then upgrading the embedded Cilium with either Cilium OSS or Cilium Enterprise to unlock more features.

Amit Gupta is a Senior Technical Marketing Engineer at Isovalent.
