
Cilium 1.16 – High-Performance Networking With Netkit, Gateway API GAMMA Support, BGPv2 and More!

Nico Vibert

Cilium 1.16 has landed! The theme of the release is “Faster, Stronger, Smarter”: faster for the blazing performance you will get with netkit, the new virtual network device; stronger for the security and operational improvements, such as port range support in Network Policies; and smarter for the new traffic engineering features, such as Kubernetes Service Traffic Distribution, Local Redirect Policy, and a 5x reduction in tail latency for DNS-based network policies!

Cilium is the third fastest-growing project in the CNCF, a testament to the thriving community of over 750 developers. This release includes contributions from engineers at organizations such as DaoCloud, Datadog, Equinix, Google, Microsoft, Seznam, Spectro Cloud, Starling Bank, and many more. Our thanks to them all.

Let’s review the major new additions in this release. For a comprehensive look at all the new features, bug fixes and changes, check the Cilium 1.16 release notes. Remember to read the upgrade guide notes before proceeding.

Cilium 1.16 – New features at a glance

The latest open source release of Cilium includes all these new features and improvements.

Networking

  • Cilium netkit: container-network throughput and latency as fast as host-network (more details)
  • Service traffic distribution: Kubernetes 1.30 Service Traffic Distribution is the successor to topology-aware routing (more details)
  • Local Redirect Policy promoted to Stable: redirecting the traffic bound for services to the local backend, such as node-local DNS (more details)
  • BGPv2: Fresh new API for Cilium’s BGP feature (more details)
  • BGP ClusterIP advertisement: BGP advertisements of ExternalIP and ClusterIP Services (more details)
  • Node IPAM Service LB: Ability to assign IP addresses from the nodes themselves to Kubernetes services, providing alternative access to services from outside of the cluster (more details)
  • Multicast datapath: Cilium 1.16 supports a multicast datapath, enabling Kubernetes users to deploy applications and benefit from multicast’s efficiencies (more details)
  • Per-pod fixed MAC address: addressing use cases such as software that is licensed based on a known MAC address (more details)

Service mesh & Ingress/Gateway API

  • Gateway API GAMMA support: East-west traffic management for the cluster via Gateway API (more details)
  • Gateway API 1.1 support: Cilium now supports Gateway API 1.1 (more details)
  • Gateway API support for more protocol options: Cilium Gateway API supports new protocol options such as proxyProtocol, ALPN and appProtocol (more details)
  • Local ExternalTrafficPolicy support for Ingress/Gateway API: External traffic can now be routed to node-local endpoints, preserving the client source IP address (more details)
  • L7 Envoy Proxy as dedicated DaemonSet: With a dedicated DaemonSet, Envoy and Cilium have separate lifecycles. Now on by default for new installs (more details)
  • Host Network mode & Envoy listeners on a subset of nodes: Cilium Gateway API Gateways/Ingress can now be deployed on the host network and on selected nodes (more details)

Security

  • Port Range support in Network Policies: This long-awaited feature has been implemented into Cilium (more details)
  • Network Policy validation status: kubectl describe cnp <name> will be able to tell if the Cilium Network Policy is valid or invalid (more details)
  • Control Cilium Network Policy Default Deny behavior: Policies usually enable default deny for the subject of the policies, but this can now be disabled on a per-policy basis (more details)
  • CIDRGroups support for Egress and Deny rules: Add support for matching CiliumCIDRGroups in Egress and Deny policy rules (more details)
  • Load “default” Network Policies from filesystem: In addition to reading policies from Kubernetes, Cilium can be configured to read policies locally (more details)
  • Select nodes as the target of Cilium Network Policies: With new ToNodes/FromNodes selectors, traffic can be allowed or denied based on the labels of the target Node in the cluster (more details)

Day 2 operations and scale

  • Improved DNS-based network policy performance: Up to a 5x reduction in tail latency for DNS-based network policies (more details)
  • New ELF loader logic: With this new loader logic, the median memory usage of Cilium was decreased by 24% (more details)
  • KVStoreMesh default option for ClusterMesh: Introduced in Cilium 1.14, and after a lot of adoption and feedback from the community, KVStoreMesh is now the default way to deploy ClusterMesh (more details)

Hubble & observability

  • CEL filters support: Hubble supports Common Expression Language (CEL), which provides more complex conditions that cannot be expressed using the existing flow filters (more details)
  • Filtering Hubble flows by node labels: Filter Hubble flows observed on nodes matching the given label (more details)
  • Improvements for egress traffic path observability: Enhancements to Cilium Metrics and Hubble flow data for traffic that traverses Egress Gateways, allowing better troubleshooting of this popular Cilium feature (more details)
  • K8S event generation on packet drop: Hubble is now able to generate a k8s event for a packet dropped from a pod, and that can be verified with kubectl get events (more details)

Networking

Cilium netkit

Containerization has always come at a performance cost, most visibly in networking. A standard container networking architecture can result in a 35% drop in network performance compared to the host. How could we bridge that gap?

Over the past 7 years, Cilium developers have added several features aimed at reducing this performance penalty.

With Cilium 1.16 and the introduction of Cilium netkit, you can finally achieve performance parity between host and container. 

Cilium is the first public project providing built-in support for netkit, a Linux network device introduced in the 6.7 kernel release and developed by Isovalent engineers.
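
If you want to try it out, netkit is opted into through Helm. Below is a minimal sketch, assuming the bpf.datapathMode Helm key and a kernel recent enough to ship netkit; check the netkit documentation for the exact requirements of your environment.

# values.yaml (sketch): switch the datapath device from the default veth to netkit
bpf:
  datapathMode: netkit

You could then roll this out with, for example, cilium install --version 1.16.0 --values values.yaml, mirroring the installation commands used later in this post.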

To learn more, read our deep dive into the journey to high-performance container networking, with netkit as the final frontier.

Cilium netkit: The Final Frontier in Container Networking Performance

Learn how Cilium can now provide host-level performance.


Service traffic distribution

Cilium 1.16 supports Kubernetes’ new “Traffic Distribution for Services” model, which aims to provide topology-aware traffic engineering. It can be considered the successor to features such as Topology-Aware Routing and Topology-Aware Hints.

Introduced in Kubernetes 1.30, Service Traffic Distribution enables users to express preferences on traffic policy for a Kubernetes Service. Currently, the only supported value is “PreferClose”, indicating a preference for routing traffic to endpoints that are topologically proximate to the client. 

By keeping the traffic within a local zone, users can optimize performance, cost and reliability.

Service Traffic Distribution can be enabled directly in the Service specification (rather than through annotations, as was the case with Topology-Aware Hints & Routing):

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: web
  type: ClusterIP
  trafficDistribution: PreferClose

In the demo below, we have 6 backend pods across two Availability Zones. By default, traffic to the Service is distributed across all 6 pods, regardless of their location.

When enabling Traffic Distribution, you can see that the traffic from the client is only sent to the 3 pods in the same AZ. When no local backend is available, traffic is forwarded to backends outside the local zone.

Alongside Service Traffic Distribution support, Cilium also introduces tooling to monitor inter- and intra-zone traffic. Given that Service Traffic Distribution and its predecessors all derive zones from labels placed on the nodes, users can now monitor cross-zone traffic using Hubble’s new node label capabilities, described later in this blog post, and identify how to reduce cost and latency.

To learn more, check out the Cilium documentation.

Local Redirect Policy promoted to Stable

Service Traffic Distribution is not the only traffic engineering feature we wanted to highlight in this release. Local Redirect Policy was initially introduced in Cilium 1.9 and is commonly adopted by users looking to optimize networking. Just as Service Traffic Distribution helps keep traffic within the zone, Local Redirect Policy prevents traffic from even leaving the node by directing traffic bound for services to a local backend.

It’s been particularly popular for DNS use cases: use it with NodeLocal DNSCache to improve DNS lookup latency in your clusters.

NodeLocal DNSCache relies on a DNS caching agent running on each cluster node as a DaemonSet. With Cilium’s eBPF-based Local Redirect Policy, DNS traffic from a Pod goes to the DNS cache running on the same node as the Pod, reducing latency among other benefits.

apiVersion: "cilium.io/v2"
kind: CiliumLocalRedirectPolicy
metadata:
  name: "nodelocaldns"
  namespace: kube-system
spec:
  redirectFrontend:
    serviceMatcher:
      serviceName: kube-dns
      namespace: kube-system
  redirectBackend:
    localEndpointSelector:
      matchLabels:
        k8s-app: node-local-dns
    toPorts:
      - port: "53"
        name: dns
        protocol: UDP
      - port: "53"
        name: dns-tcp
        protocol: TCP

To learn more, check out the documentation or read how Trendyol, one of the five largest e-commerce platforms in EMEA, is optimizing cluster performance with Local Redirect Policy.

BGPv2

Most applications running in Kubernetes clusters need to talk to external networks. In self-managed Kubernetes environments, BGP is often used to advertise the networks used by Kubernetes Pods and Services, making applications accessible from traditional workloads.

Cilium has natively supported BGP since the 1.10 release. It’s quickly become a popular feature as users appreciate not having to install a separate BGP daemon to connect their clusters to the rest of the network. We’ve seen thousands of users taking our “BGP on Cilium” and “Advanced BGP on Cilium” labs and collecting badges along the way. 

What we’ve noticed recently is an acceleration of its adoption in scenarios such as:

  • Users looking at accessing KubeVirt-managed Virtual Machines running on Red Hat OpenShift clusters
  • Users looking at connecting their external network fabric (such as Cisco ACI) to Kubernetes clusters

Expect some content on both architectures in the coming weeks and months.

This fast adoption means BGP on Cilium has become a victim of its own success. The current method to deploy BGP on Cilium is through a single CustomResourceDefinition, CiliumBGPPeeringPolicy, which comes with a couple of drawbacks:

  • We need to explicitly enumerate per-neighbor settings, even when they match between multiple peers
  • All peers currently get the same advertisements (there is no control mechanism equivalent to concepts such as prefix lists)

Simply put, while the CRD worked in simple topologies, sophisticated networking topologies require a better set of abstractions, similar to what most popular BGP daemons use (e.g., peer templates, route maps, etc.).

To provide users the flexibility they need, Cilium 1.16 introduces a new set of APIs: BGPv2.

Instead of a single CRD, you will be able to use a new set of CRDs to define complex network policies and configurations, making management more modular and scalable within Cilium.

  • CiliumBGPClusterConfig: Defines BGP instances and peer configurations applied to multiple nodes.
  • CiliumBGPPeerConfig: A common set of BGP peering settings that can be used across multiple peers.
  • CiliumBGPAdvertisement: Defines prefixes that are injected into the BGP routing table.
  • CiliumBGPNodeConfigOverride: Defines node-specific BGP configuration to provide a finer control.

Here is a sample BGP configuration – you can find it in the containerlab examples in this repo.

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: "65001"
  bgpInstances:
  - name: "65001"
    localASN: 65001
    peers:
    - name: "65000"
      peerASN: 65000
      peerAddress: fd00:10::1
      peerConfigRef:
        name: "cilium-peer"
    - name: "65011"
      peerASN: 65011
      peerAddress: fd00:11::1
      peerConfigRef:
        name: "cilium-peer"

---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  authSecretRef: bgp-auth-secret
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15
  families:
    - afi: ipv4
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "pod-cidr"
    - afi: ipv6
      safi: unicast
      advertisements:
        matchLabels:
          advertise: "pod-cidr"

---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: pod-cidr-advert
  labels:
    advertise: pod-cidr
spec:
  advertisements:
    - advertisementType: "PodCIDR"
      attributes:
        communities:
          wellKnown: [ "no-export" ]

---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPNodeConfigOverride
metadata:
  name: bgpv2-cplane-dev-multi-homing-control-plane
spec:
  bgpInstances:
    - name: "65001"
      routerID: "1.2.3.4"

---
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPNodeConfigOverride
metadata:
  name: bgpv2-cplane-dev-multi-homing-worker
spec:
  bgpInstances:
    - name: "65001"
      routerID: "5.6.7.8"

For those already running the existing BGPv1 feature: note that the v1 APIs will still be available in Cilium 1.16 but they will eventually be deprecated. Migration recommendations and tooling to help you move to v2 are on the roadmap.

We recommend you start any new BGP deployments with the new v2 APIs.

To learn more, visit the docs.

BGP ClusterIP advertisement

In addition to the new BGP APIs, Cilium 1.16 introduces support for new service advertisements. In prior releases, Cilium BGP could already announce the PodCIDR prefixes (for various IPAM scenarios) and LoadBalancer IP services. In 1.16, ExternalIP and ClusterIP Services can also be advertised. 

The latter might seem like an anti-pattern: ClusterIP Services are designed for internal access only. But there were 2 reasons why this feature was requested:

  1. Many users are migrating from other CNIs to Cilium, and some CNIs already support ClusterIP advertisements. 
  2. ClusterIP Services automatically get a DNS record like <service>.<namespace>.svc.cluster.local. So, by synchronizing your Kubernetes Services upstream, you could access your services by name from outside the cluster.

In the demo below, we start before the BGP session is configured. You can see the deathstar Service’s IP, its label, and the BGP configuration. Note how we now advertise ClusterIP Services, but only those with the empire label. We end by checking that the BGP sessions have come up and that the backbone router can see the prefix.
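
As a sketch of what such a configuration could look like with the BGPv2 resources introduced above, the advertisement below selects Services by label and announces their ClusterIP and ExternalIP addresses. The org: empire label is hypothetical, and the exact field names should be validated against the BGP documentation for your version.

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: service-advert
  labels:
    advertise: services
spec:
  advertisements:
    - advertisementType: "Service"
      # Which Service IP types to announce
      service:
        addresses:
          - ClusterIP
          - ExternalIP
      # Only Services carrying this (hypothetical) label are advertised
      selector:
        matchLabels:
          org: empire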

If you’d like to learn more, read the docs.

Node IPAM Service LB

For users that cannot use BGP or Cilium’s L2 Announcement (a feature particularly appreciated for home labs and a great replacement for MetalLB), Cilium 1.16 introduces another alternative to access services from outside the cluster: Node IPAM Service LB.

Similar to the ServiceLB feature available in the lightweight K3s distribution, Node IPAM Service LB assigns IP addresses from the nodes themselves to Kubernetes Services.

Enable the node IPAM feature with the nodeIPAM.enabled=true flag and make sure your Service has the loadBalancerClass set to io.cilium/node:

apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
  namespace: default
spec:
  loadBalancerClass: io.cilium/node 
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

Once the Service is created, it receives an IP address from the node itself. This option is likely to be useful in small environments.

$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   77m   v1.29.2   172.18.0.4    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-1008-gcp   containerd://1.7.13
kind-worker          Ready    <none>          77m   v1.29.2   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-1008-gcp   containerd://1.7.13
kind-worker2         Ready    <none>          77m   v1.29.2   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.8.0-1008-gcp   containerd://1.7.13

$ kubectl apply -f svc.yaml 
service/my-loadbalancer-service created

$ kubectl get svc my-loadbalancer-service 
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP             PORT(S)        AGE
my-loadbalancer-service   LoadBalancer   10.96.197.50   172.18.0.2,172.18.0.3   80:31453/TCP   14s

To learn more about this feature, check out the documentation.

Multicast datapath

Multicast is a delivery mechanism popular in traditional IP networking. It provides a smarter and more economical method to deliver packets to a subset of interested parties. Perfect for publisher/subscriber-based applications, it is common in financial applications.

Cilium 1.16 supports a multicast datapath, enabling Kubernetes users to deploy applications and benefit from multicast’s efficiencies.

To configure multicast in Cilium’s open source edition, you will need to define the multicast group (represented by an IP address in the Class D reserved range 224.0.0.0 to 239.255.255.255) and the nodes that might run subscriber Pods. This can be done by manually setting them on the relevant Cilium agents, using the cilium-dbg CLI.
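
As a rough sketch of that per-agent workflow, the commands below create a group and register a remote node as a subscriber. The group address and node IP are purely illustrative, and the exact cilium-dbg sub-commands should be checked against the multicast documentation for your version.

$ # On each relevant Cilium agent, create the multicast group (illustrative address 239.10.0.1)
$ kubectl -n kube-system exec -it daemonsets/cilium -- cilium-dbg bpf multicast group add 239.10.0.1
$ # Register a remote node (10.0.1.2 here) as a subscriber for that group
$ kubectl -n kube-system exec -it daemonsets/cilium -- cilium-dbg bpf multicast subscriber add 239.10.0.1 10.0.1.2
$ # Verify the resulting configuration
$ kubectl -n kube-system exec -it daemonsets/cilium -- cilium-dbg bpf multicast subscriber list all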

To learn more about it, check out the documentation.

For a fully-managed CRD-based experience, you can instead use Isovalent Enterprise for Cilium. You can learn more about it in a couple of recent blog posts:

Per-pod fixed MAC address

Some applications require software licenses to be based on network interface MAC addresses.

With Cilium 1.16, you will be able to set a specific MAC address for your pods, which should make licensing and reporting easier.

Simply add a specific annotation to your pod with the MAC address value:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.cilium.io/mac-address: be:ef:ca:fe:d0:0d
  name: pod-with-fixed-mac-address
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot:latest
    command: ["sleep", "infinite"]

For more information, check out the docs or watch the very short demo below:

Service mesh & Ingress/Gateway API

Gateway API GAMMA support

The Cilium Service Mesh announcement back in 2021 had wide ramifications. It made our industry entirely reconsider the concept of a service mesh and reflect on the widely-accepted sidecar-based architecture. Why did we need a service mesh in the first place? Was it for traffic observability? To encrypt the traffic within our cluster? Ingress and L7 load-balancing? And do we really need a sidecar proxy in each one of our pods?

It turns out that Cilium could already do a lot of these things natively: network policies, encryption, observability, tracing. When Cilium added support for Ingress and Gateway API to manage traffic coming into the cluster (North-South), it further alleviated the need to install and manage additional third-party tools, simplifying the life of platform operators.

One of the remaining areas of improvement for Cilium’s Service Mesh capabilities was traffic management within the cluster: it was possible by customizing the onboard Envoy proxy, but it required advanced knowledge of the proxy.

With Cilium 1.16, Cilium Gateway API can now be used for sophisticated East-West traffic management – within the cluster – by leveraging the standard Kubernetes Gateway API GAMMA.

GAMMA stands for “Gateway API for Mesh Management and Administration”. It provides a consistent model for east-west traffic management for the cluster, such as path-based routing and load-balancing internally within the cluster.

Let’s review a GAMMA configuration:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gamma-route
  namespace: gamma
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: echo
  rules:
  - matches:
    - path:
        type: Exact
        value: /v1
    backendRefs:
    - name: echo-v1
      port: 80
  - matches:
    - path:
        type: Exact
        value: /v2
    backendRefs:
    - name: echo-v2
      port: 80

You will notice that, instead of attaching a route to a (North/South) Gateway as we’ve done so far when using Gateway API for traffic entering the cluster, we now attach the route to a parent Service, called echo, using the parentRefs field.

Traffic bound to this parent service will be intercepted by Cilium and routed through the per-node Envoy proxy.

Note how we forward traffic on the /v1 path to the echo-v1 Service and do the same for /v2. This is how we can, for example, do A/B or blue/green canary testing for internal apps.

To learn more, try the newly updated Advanced Gateway API lab, read the docs or watch this video:

Gateway API enhancements

In addition to GAMMA support, Cilium 1.16’s Gateway API implementation has been boosted with multiple enhancements:

Gateway API 1.1 support:

In Gateway API 1.1, several features graduate to the Standard channel (GA), notably including support for GAMMA (mentioned above) and GRPCRoute (supported since Cilium 1.15). Features in the Standard release channel denote a high level of confidence in the API surface and provide guarantees of backward compatibility.

New protocol options support:

Cilium 1.16’s Gateway API now supports new protocol options:

  • proxyProtocol: Some load balancing solutions use the HAProxy PROXY protocol to pass source IP information along. With this new feature, Cilium can pass the PROXY protocol onwards, providing another option to preserve the client source IP (another one is highlighted below).
  • ALPN: Application-Layer Protocol Negotiation is a TLS extension required for HTTP/2. As gRPC is built on HTTP/2, when you enable TLS for gRPC you will also need ALPN to negotiate whether both client and server support HTTP/2.
  • appProtocol: Kubernetes 1.20 introduced appProtocol support for Kubernetes Services, enabling users to specify an application protocol for a particular port (see the short Service example after this list).
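
As a reminder of what appProtocol looks like on a standard Kubernetes Service (this is core Kubernetes API rather than anything Cilium-specific; the Service name and the kubernetes.io/h2c value are purely illustrative):

apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
spec:
  selector:
    app: grpc-backend
  ports:
    - name: grpc
      port: 8080
      targetPort: 8080
      protocol: TCP
      # Hint to the Gateway implementation that this port speaks HTTP/2 cleartext
      appProtocol: kubernetes.io/h2c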

Local ExternalTrafficPolicy support for Ingress/Gateway API:

When external clients access applications running in your cluster, it’s sometimes useful to preserve the original client source IP for various reasons such as observability and security. Kubernetes Services can be configured with the externalTrafficPolicy set to Local to ensure the client source IP is maintained.

In Cilium 1.16, the external traffic policy of Cilium-managed Ingress/Gateway API LoadBalancer Services can be configured globally via Helm flags or via a dedicated Ingress annotation.
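
A minimal sketch of the global Helm configuration, assuming the ingressController.service.externalTrafficPolicy and gatewayAPI.externalTrafficPolicy keys; double-check the key names against the Helm reference for your version.

# values.yaml (sketch): preserve client source IPs for Ingress and Gateway API Services
ingressController:
  enabled: true
  service:
    externalTrafficPolicy: Local
gatewayAPI:
  enabled: true
  externalTrafficPolicy: Local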

Envoy enhancements

Every Cilium release brings improvements to its usage of Envoy, the lightweight cloud native proxy. Envoy has been a core component of Cilium’s architecture for years and has always been relied upon to provide Layer 7 functionalities to complement eBPF’s L3/L4 capabilities.

Cilium 1.16 introduces some subtle changes to Envoy’s use within Cilium:

Envoy as a DaemonSet is now the default option:

Introduced in Cilium 1.14, the Envoy DaemonSet deployment option is an alternative to embedding Envoy within the Cilium agent. This option decouples Envoy from the Cilium agent, giving Envoy and Cilium independent lifecycles. In Cilium 1.16, Envoy as a DaemonSet is now the default for new installations.
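
If you prefer the previous embedded behavior, the deployment mode can still be toggled through Helm. A small sketch, assuming the envoy.enabled key controls the dedicated DaemonSet; verify against the Helm reference before relying on it.

envoy:
  # true (the default for new installs) deploys Envoy as its own DaemonSet;
  # false keeps the proxy embedded in the Cilium agent
  enabled: false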

To learn more, watch the video above or check out the documentation.

Host Network mode & Envoy listeners on a subset of nodes

Host network mode allows you to expose the Cilium Gateway API Gateway directly on the host network. This is useful in cases where a LoadBalancer Service is unavailable, such as in development environments or environments with cluster-external load balancers.

gatewayAPI:
  enabled: true
  hostNetwork:
    enabled: true

Alongside this feature, you can use a new option to only expose the Gateway/Ingress functionality on a subset of nodes, rather than on all of them.

gatewayAPI:
  enabled: true
  hostNetwork:
    enabled: true
    nodes:
      matchLabels:
        role: infra
        component: gateway-api

This will deploy the Gateway API Envoy listener only on the Cilium Nodes matching the configured labels. An empty selector selects all nodes and continues to expose the functionality on all Cilium nodes. Note both of these features are also available for the Cilium Ingress Controller.

To learn more, consult the Cilium Gateway API docs and Cilium Ingress docs.

Security

Port Range support in network policies

Cilium 1.16 supports a long-awaited feature: port ranges in network policies.

Before this, network policies required you to list ports one by one in your rules, even when they were contiguous.

The Port Range feature, announced in Kubernetes 1.21 and promoted to Stable in Kubernetes 1.25, lets you target a range of ports instead of a single port in a Network Policy, using the endPort keyword. 

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "range-port-rule"
spec:
  description: "L3-L4 policy to restrict range of ports"
  endpointSelector:
    matchLabels:
      app: server
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client
    toPorts:
    - ports:
      - port: "8080"
        endPort: 8082
        protocol: TCP

In the demo below, we start by verifying that we have access from a client to 3 servers listening on ports 8080, 8081 and 8082. Access is successful until we deploy a Cilium Network Policy allowing only the 8080-8081 range, after which access to 8082 is blocked.

Access is successful again after expanding the range to 8080-8082.

This feature is available with Kubernetes Network Policies and the more advanced Cilium Network Policies.
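
For reference, here is what the equivalent policy could look like as a standard Kubernetes NetworkPolicy using the same endPort field (a sketch whose labels mirror the Cilium example above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: range-port-rule
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client
      ports:
        - protocol: TCP
          port: 8080
          # endPort allows a contiguous range instead of listing each port
          endPort: 8082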

Network policy validation status

Sometimes, Cilium cannot detect and alert on semantically incorrect Network Policies until after they have been deployed.

The demo below shows that even though our network policy is missing a field, it appears to be accepted. The only way to find out that it was rejected is by checking the verbose agent logs, which is not an ideal user experience.

Cilium 1.16 adds information about the network policy validation condition via the operator. This means that, as you can see in the demo, you can easily find the status of the policy (valid or invalid) by checking the object with kubectl describe cnp.

Control Cilium network policy default deny behaviour

Initially introduced as an Enterprise feature in Isovalent Enterprise for Cilium 1.15, this feature has now been open-sourced.

This new feature controls the default deny behavior of network policies and allows platform owners to safely roll out new broad-based network policies, either namespace-wide or cluster-wide, without the risk of disrupting existing traffic.

The below example policy is created to intercept all DNS traffic for observability purposes. However, the enableDefaultDeny for both ingress and egress traffic is set to false. This ensures no traffic is blocked, even if this policy is the first to apply to a workload. A network or security administrator can apply this policy to the platform, without the risk of denying legitimate traffic.

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: intercept-all-dns
spec:
  endpointSelector:
    matchExpressions:
      - key: "io.kubernetes.pod.namespace"
        operator: "NotIn"
        values:
        - "kube-system"
      - key: "k8s-app"
        operator: "NotIn"
        values:
        - kube-dns
  enableDefaultDeny:
    egress: false
    ingress: false
  egress:
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
          - port: "53"
            protocol: TCP
          - port: "53"
            protocol: UDP
          rules:
            dns:
              - matchPattern: "*"

The video below captures this example policy and shows the operational risk associated with applying a broad network policy to existing workloads. It then shows how to avoid that risk by using this new EnableDefaultDeny feature in Cilium Network Policies.

CIDRGroups support for Egress and Deny rules

Another feature designed to simplify the creation and management of network policies: in Cilium 1.16, you can now use CiliumCIDRGroups in Egress and in Ingress/Egress Deny policy rules. A CiliumCIDRGroup is a list of CIDRs that can be referenced as a single entity in Cilium Network Policies and Cilium Clusterwide Network Policies.

To consume this enhancement, first create a CIDR Group as per the YAML example below:

apiVersion: cilium.io/v2alpha1
kind: CiliumCIDRGroup
metadata:
  name: example-cidr-group
spec:
  externalCIDRs:
    - "172.19.0.1/32"
    - "192.168.0.0/24"

You can then use the meta name from the CiliumCIDRGroup within the policy as below for Egress Policies:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "egress-example-cidr-group-ref-policy"
  namespace: "default"
spec:
  endpointSelector:
    matchLabels:
      app: my-service
  egress:
  - toCIDRSet:
    - cidrGroupRef: "example-cidr-group"

Load “default” policies from filesystem

Over the past few years, Cilium has become the networking platform of choice for most cloud providers. After Google (GKE) and Amazon (EKS-A), it was Microsoft that, in 2022, announced that Cilium would be the preferred CNI for AKS (Azure Kubernetes Service).

Microsoft has also contributed several features to Cilium, including this latest enhancement to load network policies from the filesystem.

This feature will be particularly popular with cloud providers and operators of multi-tenant platforms. Fetching YAML manifests from a folder lets you load common network policies (Cilium Network Policies and Cilium Clusterwide Network Policies) onto Cilium deployments. Copy the policies onto the underlying disk, mount the directory into the Cilium agent, activate the feature, and the policies will be enforced.

$ for node in kind-worker kind-worker2 kind-control-plane; do docker exec $node mkdir -p /policies && docker cp ./examples/policies/l7/http/http.yaml $node:/policies/; done
Successfully copied 2.05kB to kind-worker:/policies/
Successfully copied 2.05kB to kind-worker2:/policies/
Successfully copied 2.05kB to kind-control-plane:/policies/

$ cat values.yaml 
extraArgs:
  - --static-cnp-path=/policies
extraHostPathMounts:
  - name: static-policies
    mountPath: /policies
    hostPath: /policies
    hostPathType: Directory

$ cilium install --version 1.16.0 --namespace kube-system --values values.yaml
🔮 Auto-detected Kubernetes kind: kind
✨ Running "kind" validation checks
✅ Detected kind version "0.22.0"
ℹ️  Using Cilium version 1.16.0
🔮 Auto-detected cluster name: kind-kind
🔮 Auto-detected kube-proxy has been installed

This enables administrators to configure certain broader policies (designed, for example, to allow/block traffic to certain internal endpoints) without them appearing as policy resources in Kubernetes.

Indeed, even though the policy is enforced (as the cilium-dbg policy get command output below shows), kubectl get cnp does not return any output.

$ kubectl get cnp
No resources found in default namespace.
$ kubectl -n kube-system exec -it daemonsets/cilium -- cilium-dbg policy get
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
[
  {
    "endpointSelector": {
      "matchLabels": {
        "any:app": "myService"
      }
    },
    "ingress": [
      {
        "toPorts": [
          {
            "ports": [
              {
                "port": "80",
                "protocol": "TCP"
              }
            ],
            "rules": {
              "http": [
                {
                  "path": "/path1$",
                  "method": "GET"
                },
                {
                  "path": "/path2$",
                  "method": "PUT",
                  "headers": [
                    "X-My-Header: true"
                  ]
                }
              ]
            }
          }
        ]
      }
    ],
    "labels": [
      {
        "key": "filename",
        "value": "http.yaml",
        "source": "directory"
      },
      {
        "key": "policy-derived-from",
        "value": "CiliumClusterwideNetworkPolicy",
        "source": "directory"
      }
    ],
    "enableDefaultDeny": {
      "ingress": true,
      "egress": false
    }
  }
]

To learn more, check the docs.

Select nodes as the target of Cilium Network Policies

This feature started as a CFP by Ondrej Blazek at Seznam to give Cilium Network Policies the ability to use node labels in policy selector statements. Before the Cilium 1.16 release, users who needed to filter traffic from/to the Kubernetes nodes in their cluster using network policies had to use either the “remote-node” entity or a CIDR-based policy. Both methods had pitfalls, such as remote-node selecting all nodes in a Cluster Mesh configuration.

Before Cilium 1.16, to target nodes in a Cilium Network Policy, you would use a policy as the example below:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "from-nodes"
spec:
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
  - fromEntities:
    - remote-node

Now, with this new feature, nodes are selectable by their labels instead of by CIDR and/or the remote-node entity. Set the Helm value nodeSelectorLabels=true and use a policy such as the example below, which allows pods with the label env: prod to communicate with control-plane nodes:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "to-prod-from-control-plane-nodes"
spec:
  endpointSelector:
    matchLabels:
      env: prod
  ingress:
  - fromNodes:
    - matchLabels:
        node-role.kubernetes.io/control-plane: ""

The following agent flag can be configured to select which node labels are used to determine the identity of a node.

--node-labels strings List of label prefixes used to determine identity of a node (used only when enable-node-selector-labels is enabled)
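
Tying it together in Helm, a minimal sketch could look like the values below. nodeSelectorLabels is the value mentioned above; the extraArgs entry reuses the pattern shown earlier for the static policies feature, and the label list and its separator are assumptions to validate against the documentation.

# values.yaml (sketch)
nodeSelectorLabels: true
extraArgs:
  # Restrict which label prefixes contribute to node identity (flag format assumed)
  - --node-labels=node-role.kubernetes.io/control-plane,topology.kubernetes.io/zone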

Day 2 operations and scale

Improved DNS-based network policy performance

One of the reasons Cilium Network Policies are so popular with cluster administrators is their ability to filter based on Fully Qualified Domain Names (FQDN).

DNS-aware Cilium Network Policies enable users to permit traffic to specific FQDNs, for example by using the toFQDNs selector in network policies to only allow traffic to my-remote-service.com. This is supported by a DNS proxy that is responsible for intercepting DNS traffic and recording the IP addresses seen in the responses.

DNS-Based Network policies are extremely useful when implementing API security; improving this feature’s performance ultimately offers a better developer and end-user experience. 

With this latest release, Cilium 1.16 significantly improves CPU and memory usage and, even more crucially, delivers up to a 5x reduction in tail latency.

The implementation of toFQDNs selectors in policies has been overhauled to improve performance when many different IPs are observed for a selector: instead of creating CIDR identities for each allowed IP, IPs observed in DNS lookups are now labelled with the toFQDNs selectors matching them.

This reduces tail latency significantly for FQDNs with a highly dynamic set of IPs, such as content delivery networks and cloud object storage services. As you can see from the graphs below, these enhancements deliver a 5x improvement in tail latency.

Elegantly, upon upgrade or downgrade, Cilium will automatically migrate its internal state for toFQDNs policy entries. For more information, consult the upgrade guide.

That’s not the only performance improvement in Cilium 1.16. Let’s dive into another enhancement.

New ELF loader logic

Theoretically, the BPF programs for an endpoint must be recompiled whenever its configuration changes. Compiling is a fairly expensive process, so a mechanism named “ELF templating / substitution” was developed to avoid recompilation in the most common cases. This substitution process was, however, sub-optimal. In Cilium 1.16, it has been improved, resulting in noticeable memory gains: the median memory usage of the Cilium agent decreased by 24%.

KVStoreMesh default option for ClusterMesh

Introduced in Cilium 1.14, KVStoreMesh’s concept was inspired by the wider Cilium user community. Trip.com, the leading Chinese travel services provider, had been running Cilium Cluster Mesh at a considerable scale and pushed the limits of its architecture. Trip.com eventually forked Cilium and adapted the model to overcome some of the reliability issues they saw at their hyperscale.

The solution was to have local caches (running in key-value stores) storing information about remote meshed clusters. This benefits not only extra large cluster mesh deployments but also smaller ones as it reduces the overall control plane resource utilization.

Since its introduction in Cilium 1.14, KVStoreMesh has received strong feedback from users and becomes the default option from Cilium 1.16 onwards (note that you can still opt out and select the traditional method if you wish to do so).
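
For those who want to keep the previous behavior, here is a minimal opt-out sketch via Helm, assuming the clustermesh.apiserver.kvstoremesh.enabled key; check the Cluster Mesh documentation for your version before applying it.

clustermesh:
  apiserver:
    kvstoremesh:
      # Set to false to fall back to the pre-1.16 Cluster Mesh behavior
      enabled: false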

For more information, check out the documentation.

Hubble and observability

CEL filters support

Common Expression Language (CEL) is an implementation of common semantics for expression evaluation. It is designed to be fast, portable, and safe for execution with performance-critical applications. 

Today, CEL can be found in cloud native technologies such as the Kubernetes API to declare validation rules, policy rules and other constraints and conditions.

The motivation behind bringing CEL to Hubble is to support more complex conditions that cannot currently be expressed using the existing flow filters. Today, the hubble observe command from the Hubble CLI is limited to combining its filters with a single AND across all parameters.

CEL also allows you to compare flow fields to others, which is not currently possible with the existing Flow Filters. 

The Hubble CLI now supports an additional --cel-expression filter argument. Below is an example of how to use this new feature.

$ hubble observe --cel-expression "(_flow.l4.TCP.destination_port == uint(8080) || _flow.l4.TCP.destination_port == uint(443)) && !('reserved:host' in _flow.source.labels)"
Jul 17 13:06:10.370: monitoring/prometheus-prometheus-k8s-prometheus-0:55848 (ID:1866) -> monitoring/prometheus-kube-state-metrics-6db866c85b-k2w4l:8080 (ID:10243) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 17 13:06:25.370: monitoring/prometheus-prometheus-k8s-prometheus-0:55848 (ID:1866) -> monitoring/prometheus-kube-state-metrics-6db866c85b-k2w4l:8080 (ID:10243) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 17 13:06:40.370: monitoring/prometheus-prometheus-k8s-prometheus-0:55848 (ID:1866) -> monitoring/prometheus-kube-state-metrics-6db866c85b-k2w4l:8080 (ID:10243) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 17 13:06:55.370: monitoring/prometheus-prometheus-k8s-prometheus-0:55848 (ID:1866) -> monitoring/prometheus-kube-state-metrics-6db866c85b-k2w4l:8080 (ID:10243) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jul 17 13:08:58.398: endor/tiefighter-6b56bdc869-fpfrj:44048 (ID:12980) -> swapi.dev:443 (ID:16777218) to-stack FORWARDED (TCP Flags: ACK)
Jul 17 13:08:58.406: endor/tiefighter-6b56bdc869-fpfrj:44048 (ID:12980) -> swapi.dev:443 (ID:16777218) to-stack FORWARDED (TCP Flags: ACK, PSH)

This filter is designed to show flows where the destination port is either 8080 or 443, and the source of these flows is not an endpoint with the label “reserved:host”. By filtering out this special identity, we will not be shown any flows that originate from or are destined for one of the local host IPs.

Below is a more complex example: filtering for all flows where the source namespace does not match the destination namespace. This allows us to see all traffic that leaves a namespace. The final component of the filter removes any flows where the source carries the label of one of the reserved identities, such as world.

$ hubble observe --cel-expression "(_flow.source.namespace != _flow.destination.namespace) && !(_flow.source.labels.exists(l, l.matches('^reserved:.*$')))"
Jul 17 14:11:45.121: 10.0.2.100:45324 (remote-node) <- kube-system/hubble-ui-647f4487ff-lskxl:8081 (ID:47001) to-overlay FORWARDED (TCP Flags: ACK, FIN)
Jul 17 14:11:45.359: endor/tiefighter-6b56bdc869-gwj8h:52756 (ID:19432) -> kube-system/coredns-76f75df574-sxcvm:53 (ID:5270) to-endpoint FORWARDED (UDP)
Jul 17 14:11:45.359: endor/tiefighter-6b56bdc869-gwj8h:52756 (ID:19432) <> kube-system/coredns-76f75df574-sxcvm (ID:5270) pre-xlate-rev TRACED (UDP)
Jul 17 14:11:45.366: kube-system/coredns-76f75df574-sxcvm:40581 (ID:5270) -> 8.8.8.8:53 (world) to-stack FORWARDED (UDP)
Jul 17 14:11:45.404: 10.0.2.100:46408 (remote-node) <- kube-system/hubble-ui-647f4487ff-lskxl:8081 (ID:47001) to-overlay FORWARDED (TCP Flags: ACK)
Jul 17 14:11:45.839: kube-system/hubble-relay-6b98cbcf75-w4blh:48860 (ID:22359) -> 172.18.0.2:4244 (remote-node) to-stack FORWARDED (TCP Flags: ACK)
Jul 17 14:11:45.844: endor/tiefighter-6b56bdc869-gwj8h:52605 (ID:19432) <- kube-system/coredns-76f75df574-sxcvm:53 (ID:5270) dns-response proxy FORWARDED (DNS Answer  TTL: 4294967295 (Proxy swapi.dev. AAAA))
Jul 17 14:11:45.844: kube-system/coredns-76f75df574-sxcvm:53 (ID:5270) <> endor/tiefighter-6b56bdc869-gwj8h (ID:19432) pre-xlate-rev TRACED (UDP)
Jul 17 14:11:45.844: endor/tiefighter-6b56bdc869-gwj8h:52605 (ID:19432) <- kube-system/coredns-76f75df574-sxcvm:53 (ID:5270) dns-response proxy FORWARDED (DNS Answer "52.58.110.120" TTL: 11 (Proxy swapi.dev. A))
Jul 17 14:11:45.844: kube-system/coredns-76f75df574-sxcvm:53 (ID:5270) <> endor/tiefighter-6b56bdc869-gwj8h (ID:19432) pre-xlate-rev TRACED (UDP)

For more information, check out the pull request.

Filtering Hubble flows by node labels

Hubble now captures the node labels for flows, allowing you to filter by particular nodes in your cluster. This can be helpful for Kubernetes deployments across availability zones and help to identify cross-availability zone traffic between workloads regardless of their source or destination namespaces, for example. 

With Hubble providing this level of visibility, platform owners can identify misconfigured services that allow cross-AZ traffic and cause an increase in costs from the cloud provider. Typically, most deployments should prefer local traffic, with remote traffic as a fallback, meaning traffic stays within the same AZ and only traverses AZs when the local service fails.

$ hubble observe --pod default/curl-75fd79b7-gjrgg --node-label topology.kubernetes.io/zone=az-a  --not --namespace kube-system --print-node-name 
Jul 19 11:32:35.739 [kind-kind/kind-worker2]: default/curl-75fd79b7-gjrgg:58262 (ID:13886) -> default/nginx-57fdc5ff77-qpnqp:80 (ID:1359) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 19 11:32:36.747 [kind-kind/kind-worker3]: default/curl-75fd79b7-gjrgg:56786 (ID:13886) -> default/nginx-57fdc5ff77-vncdm:80 (ID:1359) to-endpoint FORWARDED (TCP Flags: ACK)
Jul 19 11:32:36.747 [kind-kind/kind-worker]: default/curl-75fd79b7-gjrgg:56786 (ID:13886) -> default/nginx-57fdc5ff77-vncdm:80 (ID:1359) to-network FORWARDED (TCP Flags: ACK)

Below is a recording showing how to use these new filters to view cross-availability zone traffic. 

Improvements for egress traffic path observability

The Cilium Egress Gateway feature allows you to select defined exit routes on your network for your containers. This feature is particularly useful when traffic leaving your cluster transits via external traffic management devices, which need to understand the specific endpoints from which traffic originates. The Egress Gateway feature works by implementing deterministic source NAT for all traffic that traverses a node, allocating a predictable IP to traffic coming from a particular Pod or a specific namespace.

In Cilium 1.16, several enhancements have been implemented to provide better observability for traffic using Egress Gateway nodes.

The first is a set of additional metrics within the Cilium agent that track the number of allocated ports for each NAT connection tuple: {source_ip, endpoint_ip, endpoint_port}. These additional statistics help monitor the saturation of endpoint connections based on the allocation of source ports.

$ cilium-dbg statedb nat-stats
IPFamily   Proto    EgressIP                RemoteAddr                   Count
ipv4       TCP      10.244.1.89             10.244.3.86:4240             1
ipv4       TCP      10.244.1.89             10.244.0.170:4240            1
ipv4       TCP      172.18.0.2              172.18.0.5:4240              1
ipv4       TCP      172.18.0.2              172.18.0.3:4240              1
ipv4       TCP      172.18.0.2              172.18.0.5:6443              4
ipv4       ICMP     172.18.0.2              172.18.0.5                   6
ipv4       ICMP     172.18.0.2              172.18.0.3                   6
ipv6       TCP      [fc00:c111::2]          [fc00:c111::5]:4240          1
ipv6       TCP      [fd00:10:244:1::e8ce]   [fd00:10:244:3::6860]:4240   1
ipv6       TCP      [fc00:c111::2]          [fc00:c111::3]:4240          1
ipv6       TCP      [fd00:10:244:1::e8ce]   [fd00:10:244::b991]:4240     1
ipv6       ICMPv6   [fc00:c111::2]          [fc00:c111::5]               6
ipv6       ICMPv6   [fc00:c111::2]          [fc00:c111::3]               6

The Cilium metric nat_endpoint_max_connection has also been implemented so that these statistics can be monitored in your alerting platform. The Cilium documentation has also been updated to include a troubleshooting section for Egress Gateway, explaining these new statistics and metrics in further detail.

Hubble flow data has been further updated with Egress Gateway traffic paths in mind. Earlier in this blog post, we’ve already covered the ability to capture and filter traffic flows based on Node Label, so let’s look at two further new filters.

In the below Hubble flow output, the pod xwing is contacting an external device on IP address 172.18.0.7; this traffic is subject to address translation by the Egress Gateway. 

The new fields implemented are:

  • IP.source_xlated
  • node_labels
  • interface

Here is a JSON output of a flow recorded by Hubble:

{
  "flow": {
    "time": "2024-07-18T13:58:32.826611870Z",
    "uuid": "39a6a8f3-53cf-4dc8-8cb0-ce19c558228c",
    "verdict": "FORWARDED",
    "ethernet": {
      "source": "02:42:ac:12:00:06",
      "destination": "02:42:ac:12:00:07"
    },
    "IP": {
      "source": "10.244.3.136",
      "source_xlated": "172.18.0.42",
      "destination": "172.18.0.7",
      "ipVersion": "IPv4"
    },
    "l4": {
      "TCP": {
        "source_port": 60288,
        "destination_port": 8000,
        "flags": {
          "FIN": true,
          "ACK": true
        }
      }
    },
    "source": {
      "identity": 3706,
      "cluster_name": "kind-kind",
      "namespace": "default",
      "labels": [
        "k8s:class=xwing",
        "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default",
        "k8s:io.cilium.k8s.policy.cluster=kind-kind",
        "k8s:io.cilium.k8s.policy.serviceaccount=default",
        "k8s:io.kubernetes.pod.namespace=default",
        "k8s:org=alliance"
      ],
      "pod_name": "xwing"
    },
    "destination": {
      "identity": 2,
      "labels": [
        "reserved:world"
      ]
    },
    "Type": "L3_L4",
    "node_name": "kind-kind/kind-worker3",
    "node_labels": [
      "beta.kubernetes.io/arch=amd64",
      "beta.kubernetes.io/os=linux",
      "egress-gw=true",
      "kubernetes.io/arch=amd64",
      "kubernetes.io/hostname=kind-worker3",
      "kubernetes.io/os=linux"
    ],
    "event_type": {
      "type": 4,
      "sub_type": 11
    },
    "traffic_direction": "EGRESS",
    "trace_observation_point": "TO_NETWORK",
    "trace_reason": "ESTABLISHED",
    "is_reply": false,
    "interface": {
      "index": 13,
      "name": "eth0"
    },
    "Summary": "TCP Flags: ACK, FIN"
  },
  "node_name": "kind-kind/kind-worker3",
  "time": "2024-07-18T13:58:32.826611870Z"
}

These additional fields are complemented by updates to the Hubble CLI, which includes the following new arguments:

hubble observe
  --interface filter           Show all flows observed at the given interface name (e.g. eth0)
  --snat-ip filter             Show all flows SNATed to the given IP address. Each of the SNAT IPs can be specified as an exact match (e.g. '1.1.1.1') or as a CIDR range (e.g.'1.1.1.0/24').

In the example below, we filter all traffic using the specific node label in our environment to identify egress gateway nodes. The traffic has been translated to the IP address 172.18.0.42.

$ hubble observe --node-label egress-gw=true  --snat-ip 172.18.0.42
Jul 18 13:58:32.439: default/xwing:60262 (ID:3706) -> 172.18.0.7:8000 (world) to-network FORWARDED (TCP Flags: SYN)
Jul 18 13:58:32.439: default/xwing:60262 (ID:3706) -> 172.18.0.7:8000 (world) to-network FORWARDED (TCP Flags: ACK)
Jul 18 13:58:32.439: default/xwing:60262 (ID:3706) -> 172.18.0.7:8000 (world) to-network FORWARDED (TCP Flags: ACK, PSH)
Jul 18 13:58:32.440: default/xwing:60262 (ID:3706) -> 172.18.0.7:8000 (world) to-network FORWARDED (TCP Flags: ACK, FIN)

In the recording below, you can see these new features to extend Egress Gateway Observability in action. 

K8S event generation on packet drop detection

Status: Alpha

This release extends the visibility of flow information from Hubble into Kubernetes Pod events. Previously, platform consumers needed additional access to platform components to view any data generated by Cilium and Hubble. With this new feature, Cilium inserts packet drop information caused by Cilium Network Policies into Kubernetes Pod events. Please note this feature is currently Alpha.

Below is an example output of the enhanced Kubernetes Pod Event showing the packet drop.

$ kubectl get events --namespace vault
LAST SEEN   TYPE  	REASON   	OBJECT                               	MESSAGE
32s     	Warning   PacketDrop      	pod/xwing-7f87ccb97b-fcbqc     	Outgoing packet dropped (policy_denied) to endor/deathstar-b4b8ccfb5-kwf4g (10.0.1.1) TCP/80

Enabling this feature is as simple as setting the Helm values below:

hubble:
  dropEventEmitter:
    enabled: true
    interval: 2m
    # ref: https://docs.cilium.io/en/stable/_api/v1/flow/README/#dropreason
    reasons: ["auth_required", "policy_denied"]

Conclusions

Since the previous release, many new end users have stepped forward to tell their stories of why and how they’re using Cilium in production. These use cases cover multiple industries: software (G Data CyberDefense and WSO2), retail (Nemlig.com), news and media (SmartNews), finance (Sicredi, PostFinance, Rabobank) and cloud providers (DigitalOcean).

To learn more from the developers involved in creating the features covered in this blog post, join Duffie for the Cilium 1.16 episode of eCHO, the eBPF & Cilium Office Hours livestream:

Getting started

To get started with Cilium, use one of the resources below:
