Secure & Scalable Connectivity

High-performance eBPF-based network connectivity
with built-in security and optimal scale.

Cilium leverages eBPF, a powerful Linux kernel technology, to build high-performance, cloud native-aware networking, observability, and security. Going well beyond what is possible with traditional Linux networking tools such as iptables, Cilium enables zero-trust network security via Kubernetes-identity- and DNS-aware network policies, and supports distributed cloud native applications via high-performance, service-aware load-balancing. Cilium runs within large-scale, highly dynamic Kubernetes clusters as a highly efficient Container Network Interface (CNI) plugin, and its Cluster Mesh capability provides seamless, efficient, and secure cross-cluster connectivity. Non-Kubernetes workloads can be brought into a Cilium network mesh via Cilium's VM/bare-metal integration. Cilium Enterprise includes a hardened distribution of Cilium and comes with 24/7 enterprise-grade support.

Cilium Enterprise Capabilities


Highly Scalable Kubernetes CNI

Cilium is a cloud native-centric implementation of IPv4 and IPv6 Kubernetes networking and security. It implements the Container Network Interface (CNI) standard, so it integrates easily with any existing Kubernetes cluster.

Cilium supports both overlay and direct routing models, and has native integration with cloud provider networking for AWS, GCP, and Azure.
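
The routing model is typically selected at install time. The following Helm values are an illustrative sketch only (assumed key names; they vary between Cilium versions, so consult the Cilium documentation for your release):

```yaml
# Overlay mode: encapsulate pod traffic between nodes (e.g. VXLAN).
tunnel: vxlan

# Alternatively, direct/native routing, delegating pod routes to the
# underlying network fabric:
# tunnel: disabled
# autoDirectNodeRoutes: true

# Example cloud-provider integration: AWS ENI IPAM (assumed values;
# GCP and Azure have analogous settings).
# ipam:
#   mode: eni
# eni:
#   enabled: true
```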

Cilium’s control and data planes have been built from the ground up for large-scale, highly dynamic cloud native environments where hundreds or even thousands of containers are created and destroyed within seconds. Cilium’s control plane is highly optimized, running in Kubernetes clusters of up to 5,000 nodes and 100,000 pods, and Cilium’s data plane uses eBPF for efficient load-balancing and incremental updates, avoiding the pitfalls of iptables-based CNI plugins.

Zero-trust Network Policy

Compliance requirements dictate network isolation between tenant workloads within a cluster and restricted access to external workloads, but traditional IP-based firewalls struggle to implement such restrictions because pod IP addresses in Kubernetes are ephemeral and carry no workload identity.

Cilium’s eBPF-powered datapath natively understands cloud native identity. It not only implements basic Kubernetes Network Policy (e.g. label + CIDR matching) but also supports DNS-aware network policies (e.g. allow traffic to *.google.com), which dramatically simplifies defining zero-trust policies for accessing services outside of the Kubernetes cluster.
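
As an illustration, such a DNS-aware egress policy can be written as a CiliumNetworkPolicy. The sketch below uses placeholder workload labels and ports; a rule allowing DNS lookups is included so Cilium can associate resolved IPs with the FQDN pattern:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-google
spec:
  endpointSelector:
    matchLabels:
      app: crawler                     # placeholder workload label
  egress:
    # Allow DNS lookups via kube-dns so resolved IPs can be tracked.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow HTTPS only to destinations that resolve under *.google.com.
    - toFQDNs:
        - matchPattern: "*.google.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
```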

Additionally, Cilium supports L7 policies (e.g. allow HTTP GET /foo) for fine-grained access control to shared API services using common cloud native protocols such as HTTP, gRPC, and Kafka.
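
For example, a policy of this shape might look like the following sketch (labels, port, and path are placeholders):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-foo
spec:
  endpointSelector:
    matchLabels:
      app: api-server                  # placeholder: the shared API service
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend              # placeholder: the permitted client
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"          # only HTTP GET /foo is allowed
                path: "/foo"
```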

Cilium also supports deny-based network policies, cluster-wide network policy, and host-layer firewalling.
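
As a hedged sketch (assuming a Cilium version with deny-policy support), a cluster-wide deny rule can be expressed with a CiliumClusterwideNetworkPolicy; host-layer policies are written similarly using a node selector:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: deny-egress-to-metadata
spec:
  endpointSelector: {}                 # applies to all endpoints in the cluster
  egressDeny:
    # Example: block access to the cloud metadata endpoint from all pods.
    - toCIDR:
        - 169.254.169.254/32
```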

High-Performance Load-balancing

Service-based load-balancing is a core network function in Kubernetes, but using kube-proxy for load-balancing is hamstrung by the performance and scalability limitations of iptables.

Cilium leverages eBPF for high-performance L3/L4 load-balancing. For pod-to-pod service load-balancing, Cilium performs the translation at the socket layer, besting packet-based load-balancing solutions such as iptables or IPVS. For load-balancing of connections inbound to the cluster (e.g. type LoadBalancer/NodePort services and ExternalIPs), Cilium leverages XDP for NIC hardware-accelerated forwarding and supports Direct Server Return (DSR), both of which provide significant latency improvements and reduced server load. Cilium also leverages the Maglev algorithm for high-performance consistent hashing at high scale.
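
These data path features are enabled through Cilium's configuration. The Helm values below are an illustrative sketch (assumed key names and values; they differ across Cilium versions and depend on kernel and NIC support):

```yaml
# Replace kube-proxy with eBPF-based service load-balancing.
kubeProxyReplacement: strict
loadBalancer:
  algorithm: maglev        # Maglev consistent hashing
  mode: dsr                # Direct Server Return for inbound service traffic
  acceleration: native     # XDP acceleration on supported NICs
```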

Multi-cluster

With standard Kubernetes networking, each cluster is an island, requiring proxies to connect workloads that run in different clusters for the purposes of migration, disaster-recovery, or geographic locality. However, not only do proxies add complexity (as another point in your infrastructure that may fail or come under heavy load), but they also present a security challenge since internal services must be “exposed” to the entire remote cluster due to the lack of shared identity between clusters.

Cilium Cluster Mesh creates a single zone of connectivity for load-balancing, observability, and security enforcement between nodes from multiple Kubernetes clusters. This connectivity is high performance, as data flows directly from one worker node to another with no intermediate proxies. Cluster Mesh also preserves workload identity for cross-cluster traffic, meaning that network visibility tooling (such as Hubble) and network security policies continue to work just as they do within a single cluster.
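
In practice, clusters are joined into a mesh (for example with the cilium CLI's clustermesh subcommands), after which a Kubernetes Service can be marked as global so its backends are load-balanced across all connected clusters. The sketch below is illustrative; the annotation name may differ between Cilium versions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rebel-base                     # placeholder service name
  annotations:
    io.cilium/global-service: "true"   # share backends across the mesh
spec:
  selector:
    app: rebel-base                    # placeholder label
  ports:
    - port: 80
```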

Transparent Encryption

Securing data in flight is an increasingly important requirement in security-sensitive environments.

Cilium’s transparent encryption uses the highly efficient IPsec support built into the Linux kernel to automatically encrypt communication between all workloads within, or between, Kubernetes clusters.

The mechanism is simple: it requires only a single configuration setting in Cilium and no application changes. It is also highly efficient, with no sidecar or other application-layer proxying required.
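
As a sketch, with a Helm-based install the feature is typically switched on with values like the following (assumed key names; the IPsec key material is provisioned separately as a Kubernetes secret, as described in the Cilium documentation):

```yaml
encryption:
  enabled: true      # encrypt workload traffic transparently
  type: ipsec        # use the kernel's IPsec implementation
```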

External VM/Bare-metal Workloads

When enterprises adopt Kubernetes, they rarely have the luxury of immediately migrating all workloads to the new cloud native world. This leaves VM and bare-metal workloads without the benefits of service-aware load-balancing, identity-aware visibility, zero-trust network policy, and transparent encryption.

Cilium can bring Linux VM and bare-metal workloads running outside your Kubernetes cluster into your cloud native connectivity, observability, and security mesh. The Cilium agent runs standalone on the Linux workload and uses the same identity-aware, eBPF-powered datapath as the Cilium CNI implementation.
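
On the Kubernetes side, an external workload is typically registered with a CiliumExternalWorkload resource, after which the standalone agent on the VM joins the mesh. The sketch below is illustrative; field names and the exact onboarding steps vary by Cilium version:

```yaml
apiVersion: cilium.io/v2
kind: CiliumExternalWorkload
metadata:
  name: legacy-db-vm                   # assumption: matches the VM's hostname
  labels:
    app: legacy-db                     # identity labels applied to the VM's traffic
spec:
  ipv4-alloc-cidr: 10.192.1.0/30       # placeholder CIDR allocated to this workload
```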