Isovalent Enterprise for Cilium 1.15: eBPF-based IP Multicast, BGP support for Egress Gateway, Network Policy Change Tracker, and more!
We are delighted to announce Isovalent Enterprise for Cilium 1.15, introducing IP Multicast!
Isovalent Enterprise for Cilium is the hardened, enterprise-grade, and 24×7-supported version of the eBPF-based cloud networking platform Cilium. In addition to all features available in the open-source version of Cilium, the enterprise edition includes advanced networking, security, and observability features popular with enterprises and telco providers.
Isovalent’s new enterprise release of Cilium now supports IP Multicast: the ability to send traffic to a subset of recipients, rather than forwarding packets to a single machine (unicast) or to all machines (broadcast).
Popular for publisher/subscriber-type applications (audio/video streaming, software distribution, and especially financial market data feeds) and IoT/edge scenarios, multicast has been widely adopted across traditional IP networks for decades. With this new Isovalent Enterprise for Cilium release, Kubernetes users can now distribute traffic efficiently across the cluster through Cilium’s eBPF-based multicast datapath.
Isovalent Enterprise for Cilium 1.15 also introduces new Egress features: Cilium BGP can now advertise Egress Gateway IP addresses and Egress Gateway is now topology-aware, helping users minimize data transfer costs and optimize routing.
To assist operators with troubleshooting and auditing network policies, we are introducing a Policy Change Tracker within the Hubble UI. Users will be able to monitor and compare network policy versions, quickly identify breaking changes, and easily restore previous policies.
What is new in Cilium & Isovalent Enterprise for Cilium 1.15?
This release is based on the open source release of Cilium 1.15 and includes all 1.15 features. As a reminder, Cilium 1.15 introduced the following enhancements and improvements:
- Gateway API 1.0 Support: Cilium now supports Gateway API 1.0 (more details)
- Gateway API gRPC Support: Cilium can now route gRPC traffic, using the Gateway API (more details)
- Annotation Propagation from GW to LB: Both the Cilium Ingress and Gateway API can propagate annotations to the Kubernetes LoadBalancer Service (more details)
- Ingress Network Policy: Enforce ingress network policies for traffic inbound via Ingress + GatewayAPI (more details)
- BGP Security: Support for MD5-based session authentication has landed (more details)
- BGP Traffic Engineering: New BGP attributes (communities and local preference) are now supported (more details)
- Cluster Mesh Twofold Scale Increase: Cluster Mesh now supports 511 meshed clusters (more details)
- Terraform/OpenTofu & Pulumi Cilium Provider: You can now deploy Cilium using your favourite Infra-As-Code tool (more details)
- Hubble Redact: Remove sensitive information from Hubble output (more details)
- Hubble Exporter: Export Hubble flows directly to logs for shipping to any SIEM platform (more details)
- eBPF-based Multicast: Multicast is now supported in Cilium’s data path, controlled using new CLI commands for multicast management (more details)
In addition to the above, the Isovalent Enterprise release provides these new capabilities:
- Fully managed eBPF-based Multicast: Custom Resource Definition-based control plane to simplify and automate multicast operations (more details)
- BGP support for Egress Gateway: BGP can now advertise Egress Gateway IP addresses (more details)
- Topology-Aware Egress Routing: The egress gateway selection process can now be aware of the physical topology (more details)
- Hubble Policy Change Tracker: The Hubble UI now includes a timeline of changes to Network Policies to help users audit and troubleshoot (more details)
- Cluster Mesh Mixed Routing Mode: Clusters can now be meshed even if they run different datapath modes
- Control Default Deny Policy behavior: Ability to safely roll out broad-based network policies, without the risk of disrupting existing traffic (more details)
- Cilium on Red Hat OpenShift on AWS (ROSA): Bring Cilium to your managed Red Hat OpenShift instances running in AWS
- Cilium on Talos Linux: Isovalent engineering has now validated and built a testing framework for deploying this and future versions of Isovalent Enterprise for Cilium to Talos Linux.
eBPF-based IP Multicast
In the late 1980s, Dr Steve Deering was working on a project that required him to send a message from one computer to a group of computers across a Layer 3 network. Dr Deering’s research led him to the conclusion that routing protocols of that time did not support this functionality.
Dr Deering eventually published a doctoral thesis on multicast and is thus known as the inventor of IP multicast (anecdotally, he also happens to be the lead designer of IPv6).
IP Multicast is about “sending a message from a single source to selected multiple destinations across a Layer 3 network in one data stream” (even after all these years, my CCIE 3.0 study book was worth keeping).
| Delivery Method | Message Delivery Mechanism |
| --- | --- |
| Unicast | Message sent from one source to one destination |
| Broadcast | Message sent from one source to all the destinations on the local network |
| Multicast | Message sent from one source to selected multiple destinations across a routed network in one data stream |
Let’s use a few analogies to explain these concepts. Unicast is like sending a personal letter or making a phone call. Broadcast is comparable to a public announcement over a loudspeaker. Multicast is similar to tuning in to a radio station or even subscribing to a podcast; only the audience interested in the stream will receive it.
Many of the use cases that made IP multicast popular — efficient distribution of audio/video, software updates, and financial market data feeds — are just as valid today in Kubernetes.
IP Multicast Overview
Let’s review the fundamental aspects of multicast in traditional IP networks. Multicast relies on two simple concepts: a sender (the source, emitting the traffic to a specific multicast group) and a set of subscribers (receiving the traffic). The subscribers need to join the multicast IP address, often called a multicast group.
Multicast Layer 3 addresses range from 224.0.0.0 to 239.255.255.255 (with 239.0.0.0-239.255.255.255 being the well-known range for private addresses).
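As a quick sanity check on these ranges, Python’s standard `ipaddress` module can classify any address against them. This small helper is purely illustrative and not part of Cilium:

```python
import ipaddress

def classify(addr: str) -> str:
    """Classify an IPv4 address against the multicast ranges above."""
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:                          # outside 224.0.0.0/4
        return "not multicast"
    if ip in ipaddress.ip_network("239.0.0.0/8"):    # administratively scoped
        return "multicast (private range)"
    return "multicast"

print(classify("10.0.0.1"))     # not multicast
print(classify("225.0.0.11"))   # multicast
print(classify("239.1.2.3"))    # multicast (private range)
```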
The network requires a mechanism to let subscribers inform the rest of the network that they’d like to join (or leave) a specific group. To do so, they would typically send Internet Group Management Protocol (IGMP) join/leave packets to the network, which would then process (or snoop) the packets and distribute the traffic only onto the interfaces where subscribers have been seen.
As illustrated below, the benefits are reflected in smarter network utilization. Imagine a scenario where you have 4 clients located across 4 nodes, with 2 subscribers interested in a network feed and 2 non-subscribers.
- In unicast, when the source starts emitting traffic, it would need to send 2 copies of the packets, which would then make their way to the subscribers.
- Broadcasting can be wasteful. The source sends one copy of the packet, which is then replicated and sent to all hosts, even those that don’t need to receive it.
- Multicast is the most efficient – a single packet is sent and replicated only to the nodes with subscribers.
Let’s now see how Isovalent Enterprise for Cilium supports multicast for Kubernetes pods.
IP Multicast with Isovalent Enterprise for Cilium
Let’s enable the multicast feature with the Cilium CLI:
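The exact enablement snippet is version-specific; as a sketch, with the Cilium CLI’s Helm passthrough this typically amounts to something like `cilium upgrade --set multicast.enabled=true`. Note that the `multicast.enabled` key is an assumption here and may differ in your release:

```yaml
# Helm values sketch -- the exact key name is an assumption
# and may differ across releases.
multicast:
  enabled: true
```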
The multicast data path is now enabled. It can be programmed through two methods:
- Manually (for open source and enterprise users), using the CLI on the Cilium agent
- Via a `CustomResourceDefinition` (for Isovalent Enterprise for Cilium customers only)
Let’s see how simple multicast in Kubernetes is with the `IsovalentMulticastGroup` CRD. We will create 3 multicast groups with the following manifest.
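The manifest for the three groups could look like the sketch below. The `apiVersion` and `groupAddrs` field names reflect our reading of the enterprise CRD and may differ in your release:

```yaml
apiVersion: isovalent.com/v1alpha1   # assumption: enterprise API group/version
kind: IsovalentMulticastGroup
metadata:
  name: multicast-groups
  namespace: kube-system
spec:
  groupAddrs:                        # the multicast groups Cilium should manage
    - "225.0.0.11"
    - "225.0.0.12"
    - "225.0.0.13"
```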
Once this manifest has been deployed and these groups configured, Cilium will snoop IGMP join and leave messages for these group addresses (IGMPv2 and IGMPv3 are both supported).
At first, there are no endpoints subscribed to these groups:
Let’s deploy a couple of pods and subscribe them to the `225.0.0.11` multicast group:
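For reference, “subscribing” inside a pod is just a standard socket-level IGMP join. Here is a minimal Python sketch of what the pod’s application does (the group matches the walkthrough; the port is chosen purely for illustration):

```python
import socket
import struct

GROUP, PORT = "225.0.0.11", 5007   # port is arbitrary, for illustration only

def mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    # ip_mreq structure handed to IP_ADD_MEMBERSHIP:
    # the group address followed by the local interface address.
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(iface))

def subscribe(group: str = GROUP, port: int = PORT) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # This setsockopt makes the kernel emit the IGMP membership report
    # that Cilium snoops to learn about the subscriber.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq(group))
    return sock

print(len(mreq(GROUP)))   # ip_mreq is two packed IPv4 addresses: 8 bytes
```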
When checking on the Cilium agent on the node where the pod is deployed, we can now see our subscribed pods:
To show that multicast works across the cluster, let’s jump on a pod on a different node than our subscribers:
Let’s transmit some traffic to the multicast group:
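On the sending side, nothing multicast-specific is needed beyond a UDP socket with a multicast TTL set. A Python sketch (the TTL value and port are illustrative):

```python
import socket

GROUP, PORT = "225.0.0.11", 5007   # port chosen for illustration

def make_sender(ttl: int = 16) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # The multicast TTL bounds how many L3 hops the datagram may traverse.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return s

sender = make_sender()
print(sender.getsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL))  # 16
# sender.sendto(b"tick", (GROUP, PORT))  # the actual transmit; requires a
#                                        # route to 224.0.0.0/4 on the host
```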
We can see the multicast traffic arriving at our subscribed pods and only to them:
To observe multicast traffic (historically a challenging task in traditional networks), you can use Isovalent Enterprise for Tetragon to monitor it and display the data via Prometheus and Grafana. You can even visualize metrics for a specific multicast group.
If you’d like to learn more, you can start our new Multicast lab or schedule a demo with our experts.
Multicast Lab
In this lab, learn about the new Isovalent Enterprise for Cilium 1.15 feature - Multicast!
Start Lab
BGP support for Egress Gateway
Egress Gateway is a popular Cilium feature: the ability to force traffic to exit via a specific egress node and to be masqueraded with a predictable source IP enables engineers to control and secure traffic leaving Kubernetes clusters. Isovalent Enterprise for Cilium includes High Availability support for organizations with mission-critical requirements.
What is often overlooked is the return traffic back to the cluster. The specific egress IPs need to be known to the external network. Until now, there was no automated way of advertising these IPs to the external network.
Isovalent Enterprise for Cilium addresses this by introducing BGP support for Egress Gateway IPs.
BGP is one of the most commonly used methods to connect Kubernetes clusters to existing networks and Cilium’s built-in implementation of BGP enables seamless connectivity between Kubernetes services and the existing network.
This new feature introduces BGP advertisement of the Egress NAT IPs to external routers. This is especially useful when an extra IP has been configured on an egress NIC and, more broadly, whenever the egress IP is not reachable via the underlay by default.
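For context, the peering itself is configured through Cilium’s BGP control plane. Below is a minimal sketch using the OSS `CiliumBGPPeeringPolicy` shape (ASNs, addresses, and labels are illustrative); the knob that adds egress IPs to the advertisements is enterprise-specific and not shown:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: egress-peering
spec:
  nodeSelector:
    matchLabels:
      egress-gateway: "true"          # peer only from the egress nodes (illustrative label)
  virtualRouters:
    - localASN: 65001
      exportPodCIDR: false
      neighbors:
        - peerAddress: "10.0.0.1/32"  # upstream router (illustrative)
          peerASN: 65000
  # The enterprise release extends this control plane so that Egress Gateway
  # IPs are included in the advertised routes; the exact configuration is
  # enterprise-specific and not shown here.
```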
To learn more about this feature and the broader design considerations of External Traffic Engineering, watch Piotr’s and Michael’s session from the recent Cloud Native Rejekts conference.
Topology-Aware Egress Routing
Organizations often deploy clusters across multiple zones to avoid area-wide failures. While this improves resilience and availability, it can introduce sub-optimal routing, increased latency, and extra data transfer costs.
By default, Kubernetes and tools like Cilium are unaware of the underlying areas and physical locations where clusters and nodes run. Concepts such as Topology Aware Routing (also known as Topology Aware Hints) have been introduced to address this shortcoming.
Isovalent Enterprise for Cilium 1.15 adds physical topology awareness to the egress gateway selection process.
Users can now rely on the well-known Node label `topology.kubernetes.io/zone` to augment the default traffic distribution in a group of HA egress gateways. This helps with optimizing latency and reducing cross-zone traffic costs.
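As a sketch of how this might be expressed: the `IsovalentEgressGatewayPolicy` CRD is enterprise-specific, and the `apiVersion`, the `azAffinity` field, and its value below are assumptions based on our reading that may differ in your release:

```yaml
apiVersion: isovalent.com/v1           # assumption: enterprise API group/version
kind: IsovalentEgressGatewayPolicy
metadata:
  name: egress-zone-aware
spec:
  egressGroups:
    - nodeSelector:
        matchLabels:
          egress-gateway: "true"       # the HA group of gateway nodes (illustrative label)
  # Prefer gateways whose topology.kubernetes.io/zone matches the source
  # pod's zone; field name and value are assumptions.
  azAffinity: localPriority
```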
To familiarize yourself with Egress Gateway and some of the features highlighted in this post, why don’t you start our Egress Gateway lab?
Egress Gateway Lab
Ensure your Kubernetes traffic exits the cluster with a predictable IP by using the Egress Gateway feature!
Start Egress Gateway Lab
Networking Observability with Hubble
As mentioned at the start of this blog post, Cilium 1.15 OSS features are also available in this enterprise release, including enhancements and features for Hubble.
- Hubble CLI enhancements: new CLI enhancements provide easier filtering of flows across your Kubernetes networking platform (more details)
- Hubble Redact: Remove sensitive information from Hubble output (more details)
- Hubble Exporter: Export Hubble flows directly to logs, for shipping to any SIEM platform (more details)
Below, you can check out one of our latest videos, which covers the Hubble Exporter feature in more detail.
Hubble Policy Change Tracker
We have recently published a series of blog posts to help Kubernetes users improve the security of their clusters by adopting network policies:
- Part 1: Introduction to Cilium Network Policies
- Part 2: Tutorial: Cilium Network Policy in Practice
- Part 3: Tutorial: Using The Network Policy Editor
In addition to the tutorials above, you can leverage Isovalent Enterprise for Cilium and the enterprise edition of Hubble to assist you with troubleshooting network policies. We’ve talked previously about how Hubble Enterprise can improve the cluster’s security posture with network policies and provide historical data for forensics purposes.
In this new release, we are introducing a built-in Policy Change Tracker, which will be a very welcome tool for troubleshooting network policies and auditing changes. Let’s take you through an example.
Imagine you get notified about application connectivity issues. Using the Hubble CLI or interface, you can quickly assess that flows to the `zookeeper` and `coreapi` services are being dropped. You can also see that there have been three changes to Network Policies, indicated by the orange dots:
Clicking on one of these dots will show you what change happened at that moment. In our example, the middle dot indicates that the `allow-all-within-namespace` CiliumNetworkPolicy was deleted. This was when flows to the `zookeeper` service started to drop.
With the Policy Change Tracker, you can identify when and how network policies were modified on a namespace basis.
For our example, clicking on the Network Policy change indicator shows you the version of the `allow-all-within-namespace` CiliumNetworkPolicy just before it was deleted. The policy allowed flows from or to any endpoint in the namespace.
The Network Policy view can also show you exactly what has changed between two versions: click the button just next to the version selection dropdown.
In our example, you can see that the destination port in the `l7-ingress-visibility` CiliumNetworkPolicy was changed. This is what caused the dropped flows to the `coreapi` service.
With this feature, you can spot unintended changes to network policies and revert them by downloading previous versions.
Check out the feature in action with a walkthrough from our Observability specialist, Dean Lewis.
Control Cilium Network Policy Default Deny behaviour
Cilium provides fantastic granular network policy control across your cloud native platform beyond the scope and features of Kubernetes network policies. It’s one of the main reasons customers use Cilium in their environments.
In this enterprise release, we introduce the ability to create network policies that do not implicitly set endpoints into a default-deny mode. This feature will become available in Cilium OSS 1.16; however, as part of our Enterprise support value, we have backported it into the Isovalent Enterprise for Cilium 1.15 release.
If we consider a large Kubernetes cluster made up of multi-tenant workloads, the owner of the platform may wish to introduce new policies that:
- Proxy all DNS requests for monitoring purposes
- Deny access to sensitive IPs, for example, Cloud Provider metadata APIs
- Introduce new cluster-wide tooling, such as security scanners or data protection software
Cilium Network Policies already provide the necessary policy language and features to implement these types of policies today. However, applying such broad policies to existing platforms carries an operational risk.
These new, broader policies may be the first policies applied to an existing workload endpoint in the cluster. The platform owner may therefore unintentionally block traffic for workloads that currently operate in a default allow-all mode because no previous policy was applied to them. Once the new broad policy is applied, anything that does not match its ruleset is automatically denied.
This new feature, which controls the behavior of default deny for network policies, allows platform owners to safely roll out new broad-based network policies, either namespace or cluster-wide, without the risk of disrupting existing traffic.
The example policy below is created to intercept all DNS traffic for observability purposes. However, `enableDefaultDeny` is set to `false` for both ingress and egress traffic. This ensures that no traffic is blocked, even if this policy is the first one applied to a workload. A network or security administrator can apply this policy to the platform without the risk of denying legitimate traffic.
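A policy matching that description could look like the following sketch (the selectors and the DNS destination are illustrative; adjust them to your cluster):

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: intercept-all-dns
spec:
  endpointSelector: {}          # applies to every endpoint in the cluster (illustrative)
  enableDefaultDeny:            # do NOT flip matched endpoints into default deny
    ingress: false
    egress: false
  egress:
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"   # proxy and observe every DNS query
```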
The video below shows this example policy in use: first the operational risk of applying a broad network policy to existing workloads, and then how to avoid that risk with the new `enableDefaultDeny` field in Cilium Network Policies.
Core Isovalent Enterprise for Cilium features
Advanced Networking Capabilities
In addition to all the core networking features available in the open source edition of Cilium, Isovalent Enterprise for Cilium also includes advanced routing and connectivity features popular with large enterprises and Telco, including:
- Multi Networking and the ability to connect a pod to multiple networks
- Segment Routing version 6 (SRv6) L3 VPN support
- Overlapping PodCIDR support for Cluster Mesh
- Phantom Services for Cluster Mesh
Platform Observability, Forensics, and Auditing
Isovalent Enterprise for Cilium includes Role-Based Access Control (RBAC), enabling platform teams to provide users access to network data and dashboards relevant to their namespaces, applications, and environments.
From the Hubble Enterprise UI, operators can use the built-in Network Policy Editor to create network policies based on actual cluster traffic.
Isovalent Enterprise for Cilium also includes Hubble Timescape – a time machine with powerful analytics capabilities for observability data.
While Hubble OSS only includes real-time information, Hubble Timescape is an observability and analytics platform capable of storing & querying observability data that Cilium and Hubble collect.
Isovalent Enterprise for Cilium also supports logs export to SIEM (Security Information and Event Management) platforms such as Splunk or an ELK (Elasticsearch, Logstash, and Kibana) stack.
To explore some of the Hubble enterprise features, check out the Hubble for the Enterprise blog or try out some of the labs, such as the Network Policies Lab and Connectivity Visibility Lab.
Enterprise-Grade Resilience
Isovalent Enterprise for Cilium includes capabilities for organizations that require the highest level of availability. These include features such as High Availability for DNS-aware network policy (video) and High Availability for the Cilium Egress Gateway (video).
Enterprise-Grade Support
Last but certainly not least, Isovalent Enterprise for Cilium includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Customers also benefit from the help and training from professional services to deploy and manage Cilium in production environments.
Shortening time to value with Isovalent Enterprise for Cilium Support
Many Fortune 500 companies pick Isovalent on their cloud native journey to get the expert knowledge and support their business-critical applications need. Learn what Isovalent’s support consists of, what our Customer Reliability Engineering team can do for you, and what “CuTEs” have to do with it.
Download Brief
Learn More!
To learn more about Isovalent Enterprise for Cilium 1.15, check out the following links:
- Join the 1.15 release webinar to learn more about the latest and greatest open source and enterprise features of Isovalent Enterprise for Cilium and Cilium 1.15.
- Follow up with our technical workshop, where you’ll get hands-on with these new features!
- Request a Demo – Schedule a demo session with an Isovalent Solution Architect.
- Read more about the Cilium OSS 1.15 release
- Learn more about Isovalent & Cilium with our resource library – including guides, tutorials, and interactive labs.
Feature Status
All features introduced in this release are considered “Limited”. Here is a brief definition of the feature maturity levels:
- Stable: A feature that is appropriate for production use in a variety of supported configurations due to significant hardening from testing and use.
- Limited: A feature that is appropriate for production use only in specific scenarios and in close consultation with the Isovalent team.
- Beta: A feature that is not appropriate for production use but where user testing and feedback are requested. Customers should contact Isovalent support before considering Beta features.
Prior to joining Isovalent, Nico worked in many different roles—operations and support, design and architecture, and technical pre-sales—at companies such as HashiCorp, VMware, and Cisco.
In his current role, Nico focuses primarily on creating content to make networking a more approachable field and regularly speaks at events like KubeCon, VMworld, and Cisco Live.
Dean Lewis is a Senior Technical Marketing Engineer at Isovalent – the company behind the open-source cloud native solution Cilium.
Dean has a varied background in technology, from support to operations to architectural design and delivery at IT solution providers in the UK, before moving to VMware and focusing on cloud management and cloud native, which remains his primary focus. You can find Dean, past and present, speaking at various technology user groups and industry conferences, as well as on his personal blog.
Raphaël is a Senior Technical Marketing Engineer with Cloud Native networking and security specialists Isovalent, creators of the Cilium eBPF-based networking project. He works on Cilium, Hubble & Tetragon and the future of Cloud-Native networking & security using eBPF.
An early adopter of DevOps principles, he has been a practitioner of configuration management and Agile principles in operations for many years, with a special involvement in the Puppet and Terraform communities.