
Isovalent Enterprise for Cilium 1.15: eBPF-based IP Multicast, BGP support for Egress Gateway, Network Policy Change Tracker, and more!

Nico Vibert
Dean Lewis
Raphaël Pinson

We are delighted to announce Isovalent Enterprise for Cilium 1.15, introducing IP Multicast!

Isovalent Enterprise for Cilium is the hardened, enterprise-grade, and 24×7-supported version of the eBPF-based cloud networking platform Cilium. In addition to all features available in the open-source version of Cilium, the enterprise edition includes advanced networking, security, and observability features popular with enterprises and telco providers. 

Isovalent’s new enterprise release of Cilium now supports IP Multicast: the ability to send traffic to a subset of recipients, instead of forwarding packets to a single machine (unicast) or to all machines (broadcast).

Popular for publisher/subscriber-type applications (audio/video streaming, software distribution, and especially financial market data feeds) and IoT/edge scenarios, multicast has been widely adopted across traditional IP networks for decades. With this new Isovalent Enterprise for Cilium release, Kubernetes users can now distribute traffic efficiently across the cluster through Cilium’s eBPF-based multicast datapath.

Isovalent Enterprise for Cilium 1.15 also introduces new Egress features: Cilium BGP can now advertise Egress Gateway IP addresses and Egress Gateway is now topology-aware, helping users minimize data transfer costs and optimize routing.

To assist operators with troubleshooting and auditing network policies, we are introducing a Policy Change Tracker within the Hubble UI. Users will be able to monitor and compare network policy versions, quickly identify breaking changes, and easily restore previous policies.

What is new in Cilium & Isovalent Enterprise for Cilium 1.15?

This release is based on the open source release of Cilium 1.15 and includes all 1.15 features. As a reminder, Cilium 1.15 introduced the following enhancements and improvements:

  • Gateway API 1.0 Support: Cilium now supports Gateway API 1.0 (more details)
  • Gateway API gRPC Support: Cilium can now route gRPC traffic, using the Gateway API (more details)
  • Annotation Propagation from GW to LB: Both the Cilium Ingress and Gateway API can propagate annotations to the Kubernetes LoadBalancer Service (more details)
  • Ingress Network Policy: Enforce ingress network policies for traffic inbound via Ingress + GatewayAPI (more details)
  • BGP Security: Support for MD5-based session authentication has landed (more details)
  • BGP Traffic Engineering: New BGP attributes (communities and local preference) are now supported (more details)
  • Cluster Mesh Twofold Scale Increase: Cluster Mesh now supports 511 meshed clusters (more details)
  • Terraform/OpenTofu & Pulumi Cilium Provider: You can now deploy Cilium using your favourite Infra-As-Code tool (more details)
  • Hubble Redact: Remove sensitive information from Hubble output (more details)
  • Hubble Exporter: Export Hubble flows directly to logs for shipping to any SIEM platform (more details)
  • eBPF-based Multicast: Multicast is now supported in Cilium’s data path, controlled using new CLI commands for multicast management (more details)

In addition to the above, the Isovalent Enterprise release provides these new capabilities:

  • Fully managed eBPF-based Multicast: Custom Resource Definition-based control plane to simplify and automate multicast operations (more details)
  • BGP support for Egress Gateway: BGP can now advertise Egress Gateway IP addresses (more details)
  • Topology-Aware Egress Routing: The egress gateway selection process can now be aware of the physical topology (more details)
  • Hubble Policy Change Tracker: The Hubble UI now includes a timeline of changes to Network Policies to help users audit and troubleshoot (more details)
  • Cluster Mesh Mixed Routing Mode: Clusters can now be meshed even if they run different datapath modes
  • Control Default Deny Policy behavior: Ability to safely roll out broad-based network policies, without the risk of disrupting existing traffic (more details)
  • Cilium on Red Hat OpenShift on AWS (ROSA): Bring Cilium to your managed Red Hat OpenShift instances running in AWS
  • Cilium on Talos Linux: Isovalent engineering has now validated and built a testing framework for deploying this and future versions of Isovalent Enterprise for Cilium to Talos Linux.

eBPF-based IP Multicast

In the late 1980s, Dr Steve Deering was working on a project that required him to send a message from one computer to a group of computers across a Layer 3 network. Dr Deering’s research led him to the conclusion that routing protocols of that time did not support this functionality.

Dr Deering eventually published a doctoral thesis on multicast and is thus known as the inventor of IP multicast (anecdotally, he also happens to be the lead designer of IPv6).

IP Multicast is about “sending a message from a single source to selected multiple destinations across a Layer 3 network in one data stream” (even after all these years, my CCIE 3.0 study book was worth keeping).

Delivery Method   Message Delivery Mechanism
Unicast           Message sent from one source to one destination
Broadcast         Message sent from one source to all the destinations on the local network
Multicast         Message sent from one source to selected multiple destinations across a routed network in one data stream

Let’s use a few analogies to explain these concepts. Unicast is like sending a personal letter or making a phone call. Broadcast is comparable to a public announcement over a loudspeaker. Multicast is similar to tuning in to a radio station or even subscribing to a podcast; only the audience interested in the stream will receive it.

Many of the use cases IP multicasting was popular with — optimal distribution of audio/video, software updates, and financial market data feeds — are just as valid today in Kubernetes.

IP Multicast Overview

Let’s review the fundamental aspects of multicast in traditional IP networks. Multicast relies on two simple concepts: a sender (the source, emitting the traffic to a specific multicast group) and a set of subscribers (receiving the traffic). The subscribers need to join the multicast IP address, often called a multicast group.

Multicast Layer 3 addresses range from 224.0.0.0 to 239.255.255.255, with 239.0.0.0-239.255.255.255 being the well-known range reserved for private (administratively scoped) use.

The network requires a mechanism to let subscribers inform the rest of the network that they’d like to join (or leave) a specific group. To do so, they would typically send Internet Group Management Protocol (IGMP) join/leave packets to the network, which would then process (or snoop) the packets and distribute the traffic only onto the interfaces where subscribers have been seen.

As illustrated below, the benefits are reflected in smarter network utilization. Imagine a scenario where you have 4 clients located across 4 nodes, with 2 subscribers interested in a network feed and 2 non-subscribers.

  • In unicast, when the source starts emitting traffic, it would need to send 2 copies of the packets, which would then make their way to the subscribers.
  • Broadcasting can be wasteful. The source sends one copy of the packet, which is then replicated and sent to all hosts, even those that don’t need to receive it.
  • Multicast is the most efficient – a single packet is sent and replicated only to the nodes with subscribers.

Let’s now see how Isovalent Enterprise for Cilium supports multicast for Kubernetes pods.

IP Multicast with Isovalent Enterprise for Cilium

Let’s enable the multicast feature with the Cilium CLI:

$ cilium config set multicast-enabled true
✨ Patching ConfigMap cilium-config with multicast-enabled=true...
♻️  Restarted Cilium pods
$ cilium config view | grep multicast-enabled
multicast-enabled                                    true

The multicast data path is now enabled. It can be programmed through two methods:

  • Manually (for open source and enterprise users), using the CLI on the Cilium agent (a brief sketch follows this list)
  • Via a CustomResourceDefinition (for Isovalent Enterprise for Cilium customers only)
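
For reference, the manual approach drives the same datapath through the agent CLI on each node. A minimal sketch, using the same agent command family as the subscriber listing shown later in this post (the group list output shape is illustrative):

$ kubectl exec daemonsets/cilium -n kube-system -c cilium-agent -- \
  cilium bpf multicast group add 225.0.0.11
$ kubectl exec daemonsets/cilium -n kube-system -c cilium-agent -- \
  cilium bpf multicast group list
Group Address
225.0.0.11

Because these commands act on a single agent, they need to be repeated on every node, which is exactly the operational burden the CRD-based control plane removes.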

Let’s see how simple multicast in Kubernetes can be with the IsovalentMulticastGroup CRD. We will create 3 multicast groups with the following manifest.

apiVersion: isovalent.com/v1alpha1
kind: IsovalentMulticastGroup
metadata:
  name: multicastgroups
  namespace: default
spec:
  groupAddrs:
    - "225.0.0.10"
    - "225.0.0.11"
    - "225.0.0.12"

Once this manifest has been deployed and these groups configured, Cilium will snoop IGMP join and leave messages for these group addresses (IGMPv2 and IGMPv3 are both supported).

At first, no endpoints are subscribed to any of the groups:

$ kubectl exec daemonsets/cilium -n kube-system -c cilium-agent -- \
  cilium bpf multicast subscriber list all
Group           Subscriber      Type            
225.0.0.10      
225.0.0.11      
225.0.0.12    

Let’s deploy a couple of pods and subscribe them to the 225.0.0.11 multicast group:

$ kubectl run -ti --rm sub --image nicolaka/netshoot -- \
  socat UDP4-RECVFROM:6666,reuseaddr,ip-add-membership=225.0.0.11:0.0.0.0,fork -
$ kubectl run -ti --rm sub2 --image nicolaka/netshoot -- \
  socat UDP4-RECVFROM:6666,reuseaddr,ip-add-membership=225.0.0.11:0.0.0.0,fork -
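
Before looking at the Cilium side, you can optionally confirm from inside one of the subscriber pods that the kernel has joined the group. A quick check with iproute2 (assuming the pod’s interface is eth0; output abridged and illustrative):

$ kubectl exec sub -- ip maddr show dev eth0
2:      eth0
        inet  224.0.0.1
        inet  225.0.0.11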

When checking the Cilium agent on the node where the pods are deployed (here, $CILIUM_POD is set to the name of the Cilium agent pod on that node), we can now see our subscribed pods:

$ kubectl exec $CILIUM_POD -n kube-system -c cilium-agent -- \
  cilium bpf multicast subscriber list all
Group           Subscriber      Type            
225.0.0.10      
225.0.0.11      10.0.0.10       Local Endpoint  
                10.0.0.180      Local Endpoint 
225.0.0.12

To show that multicast works across the cluster, let’s jump on a pod on a different node than our subscribers:

$ kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
netshoot-65f864f868-9c7h6   1/1     Running   0          16s     10.0.0.231   kind-worker2   <none>           <none>
netshoot-65f864f868-bxmlp   1/1     Running   0          16s     10.0.2.49    kind-worker    <none>           <none>
netshoot-65f864f868-cz6sd   1/1     Running   0          16s     10.0.0.249   kind-worker2   <none>           <none>
netshoot-65f864f868-sjx4v   1/1     Running   0          16s     10.0.2.83    kind-worker    <none>           <none>
sub                         1/1     Running   0          3m6s    10.0.0.180   kind-worker2   <none>           <none>
sub2                        1/1     Running   0          2m41s   10.0.0.10    kind-worker2   <none>           <none>

Let’s transmit some traffic to the multicast group:

$ kubectl exec -it netshoot-65f864f868-bxmlp -- bash
netshoot-65f864f868-bxmlp:~# 
netshoot-65f864f868-bxmlp:~#  echo "hi! I am sending traffic to the multicast group 225.0.0.11 - do you receive?" | socat -u - UDP4-DATAGRAM:225.0.0.11:6666
netshoot-65f864f868-bxmlp:~#

We can see the multicast traffic arriving at our subscribed pods, and only at them:

$ kubectl run -ti --rm sub --image nicolaka/netshoot -- \
  socat UDP4-RECVFROM:6666,reuseaddr,ip-add-membership=225.0.0.11:0.0.0.0,fork -
If you don't see a command prompt, try pressing enter.

hi! I am sending traffic to the multicast group 225.0.0.11 - do you receive?

$ kubectl run -ti --rm sub2 --image nicolaka/netshoot -- \
  socat UDP4-RECVFROM:6666,reuseaddr,ip-add-membership=225.0.0.11:0.0.0.0,fork -
If you don't see a command prompt, try pressing enter.

hi! I am sending traffic to the multicast group 225.0.0.11 - do you receive?

To observe multicast traffic (which can be challenging in traditional networks), you can use Isovalent Enterprise for Tetragon to monitor multicast flows and display metrics via Prometheus and Grafana. You can even visualize metrics for a specific multicast group.

If you’d like to learn more, you can start our new Multicast lab or schedule a demo with our experts.

Multicast Lab

In this lab, learn about the new Isovalent Enterprise for Cilium 1.15 feature - Multicast!

Start Lab

BGP support for Egress Gateway

Egress Gateway is a popular Cilium feature: the ability to force traffic to exit via a specific egress node and to be masqueraded with a predictable source IP enables engineers to control and secure traffic leaving Kubernetes clusters. Isovalent Enterprise for Cilium includes High Availability support for organizations with mission-critical requirements.

Egress Gateway

What is often overlooked is the return traffic to the cluster: the specific egress IPs need to be known by the external network, and until now there was no automated way of advertising them.

Isovalent Enterprise for Cilium addresses this by introducing BGP support for Egress Gateway IPs.

BGP is one of the most commonly used methods to connect Kubernetes clusters to existing networks and Cilium’s built-in implementation of BGP enables seamless connectivity between Kubernetes services and the existing network.

This new feature introduces support for BGP in advertising the Egress NAT IPs to external routers. This is especially useful in scenarios where an extra IP on an egress NIC has been configured and, more broadly, when the egress IP is not reachable by the underlay by default.
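
Once BGP peering is in place, a quick way to confirm that the sessions over which the egress IPs will be advertised are established is the standard Cilium CLI (the node name, ASNs, and route counts below are illustrative):

$ cilium bgp peers
Node          Local AS   Peer AS   Peer Address   Session State   Uptime   Family         Received   Advertised
kind-worker   65001      65000     172.18.0.5     established     12m32s   ipv4/unicast   3          5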

Egress Gateway and BGP support

To learn more about this feature and the broader design considerations of External Traffic Engineering, watch Piotr’s and Michael’s session from the recent Cloud Native Rejekts conference.

Topology-Aware Egress Routing

Organizations often deploy clusters across multiple zones to avoid area-wide failures. While this improves resilience and availability, it can also introduce sub-optimal routing, increased latency, and extra data transfer costs.

By default, Kubernetes and tools like Cilium are unaware of the underlying areas and physical locations where clusters and nodes run. Concepts such as Topology Aware Routing (also known as Topology Aware Hints) have been introduced to address this shortcoming.

Isovalent Enterprise for Cilium 1.15 adds physical topology awareness to the egress gateway selection process.

Users can now rely on the well-known Node label topology.kubernetes.io/zone to augment the default traffic distribution in a group of HA egress gateways. This helps with optimising latency and reducing cross-zone traffic costs.
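
To see which zone each node belongs to (and therefore which egress gateways are local to a given workload), you can display the label directly; the node names and zones below are illustrative:

$ kubectl get nodes -L topology.kubernetes.io/zone
NAME       STATUS   ROLES    AGE   VERSION   ZONE
worker-1   Ready    <none>   12d   v1.29.0   eu-west-1a
worker-2   Ready    <none>   12d   v1.29.0   eu-west-1b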

Topology Aware Egress Gateway

To familiarize yourself with Egress Gateway and some of the features highlighted in this post, why don’t you start our Egress Gateway lab?

Egress Gateway Lab

Ensure your Kubernetes traffic exits the cluster with a predictable IP by using the Egress Gateway feature!

Start Egress Gateway Lab

Networking Observability with Hubble

As mentioned at the start of this blog post, Cilium 1.15 OSS features are also available in this enterprise release; this includes enhancements and features for Hubble:

  • Hubble CLI enhancements: new CLI enhancements provide easier filtering of flows across your Kubernetes networking platform (more details)
  • Hubble Redact: Remove sensitive information from Hubble output (more details)
  • Hubble Exporter: Export Hubble flows directly to logs, for shipping to any SIEM platform (more details)

Below, you can check out one of our latest videos, which covers the Hubble Exporter feature in more detail.

Hubble Policy Change Tracker

We have recently published a series of blog posts to help Kubernetes users improve the security of their clusters by adopting network policies:

In addition to the tutorials above, you can leverage Isovalent Enterprise for Cilium and the enterprise edition of Hubble to assist you with troubleshooting network policies. We’ve talked previously about how Hubble Enterprise can improve the cluster’s security posture with network policies and provide historical data for forensics purposes.

In this new release, we are introducing a built-in Policy Change Tracker, which will be a very welcome tool for troubleshooting network policies and auditing changes. Let’s take you through an example.

Imagine you get notified about application connectivity issues. Using the Hubble CLI or UI, you can quickly see that flows to the zookeeper and the coreapi services are being dropped. You can also see that there have been three changes to Network Policies, indicated by the orange dots:

Clicking on one of these dots will show you what changed at that moment. In our example, the middle dot indicates that the allow-all-within-namespace CiliumNetworkPolicy was deleted, which is when flows to the zookeeper service started to drop.

With the Policy Change Tracker, you can identify when and how network policies were modified on a namespace basis.

For our example, clicking on the Network Policy change indicator shows you the version of the allow-all-within-namespace CiliumNetworkPolicy just before it was deleted. The policy allowed flows from or to any endpoint in the namespace.

The Network Policy view can also show you exactly what has changed between the two versions: click the button just next to the version selection dropdown.

In our example, you can see that the destination port in the l7-ingress-visibility CiliumNetworkPolicy was changed. This is what caused the dropped flows to the coreapi.

With this feature, you can spot unintended changes to network policies and revert them by downloading previous versions.

Check out the feature in action with a walkthrough from our Observability specialist, Dean Lewis.

Control Cilium Network Policy Default Deny behaviour

Cilium provides granular network policy control across your cloud native platform, going well beyond the scope and features of standard Kubernetes network policies. It’s one of the main reasons customers use Cilium in their environments.

In this enterprise release, we introduce the ability to create network policies that do not implicitly put endpoints into a default-deny mode. This feature will become available in Cilium OSS 1.16; however, as part of our enterprise support offering, we have backported it into the Isovalent Enterprise for Cilium 1.15 release.

If we consider a large, multi-tenant Kubernetes cluster, the owner of the platform may wish to introduce new policies that:

  • Proxy all DNS requests for monitoring purposes
  • Deny access to sensitive IPs, for example, Cloud Provider metadata APIs
  • Introduce new cluster-wide tooling, such as security scanners or data protection software

Cilium Network Policies already provide the necessary policy language and features to implement these types of policies today. However, applying such broad policies to existing platforms carries an operational risk.

These new, broader policies may be the first policy applied to an existing workload endpoint in the cluster. The platform owner may therefore unintentionally block traffic for workloads that currently operate in a default allow-all mode because no previous policy was applied to them: once the new broad policy is applied, anything that does not match the ruleset is automatically denied.

This new feature, which controls the behavior of default deny for network policies, allows platform owners to safely roll out new broad-based network policies, either namespace or cluster-wide, without the risk of disrupting existing traffic.

The example policy below is created to intercept all DNS traffic for observability purposes. However, enableDefaultDeny is set to false for both ingress and egress traffic. This ensures that no traffic is blocked, even if this policy is the first one applied to a workload. A network or security administrator can apply this policy to the platform without the risk of denying legitimate traffic.

apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: intercept-all-dns
spec:
  endpointSelector:
    matchExpressions:
      - key: "io.kubernetes.pod.namespace"
        operator: "NotIn"
        values:
        - "kube-system"
      - key: "k8s-app"
        operator: "NotIn"
        values:
        - kube-dns
  enableDefaultDeny:
    egress: false
    ingress: false
  egress:
    - toEndpoints:
        - matchLabels:
            io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
          - port: "53"
            protocol: TCP
          - port: "53"
            protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
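
A minimal way to verify the effect after applying the policy (the manifest filename is an assumption, and the flow output is abbreviated and illustrative): DNS requests from workloads should show up in Hubble as proxied, forwarded flows rather than drops.

$ kubectl apply -f intercept-all-dns.yaml
ciliumclusterwidenetworkpolicy.cilium.io/intercept-all-dns created
$ hubble observe --protocol dns --last 5
default/myapp -> kube-system/kube-dns dns-request proxy FORWARDED (DNS Query api.example.com. A)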

The video below walks through this example policy, showing the operational risk of applying a broad network policy to existing workloads, and how that risk is avoided by using the new enableDefaultDeny field in Cilium Network Policies.

Core Isovalent Enterprise for Cilium features

Advanced Networking Capabilities

In addition to all the core networking features available in the open source edition of Cilium, Isovalent Enterprise for Cilium also includes advanced routing and connectivity features popular with large enterprises and telco providers.

Platform Observability, Forensics, and Auditing

Isovalent Enterprise for Cilium includes Role-Based Access Control (RBAC), enabling platform teams to provide users access to network data and dashboards relevant to their namespaces, applications, and environments.

From the Hubble Enterprise UI, operators can use the built-in Network Policy Editor to create network policies based on actual cluster traffic.

Isovalent Enterprise for Cilium also includes Hubble Timescape – a time machine with powerful analytics capabilities for observability data.

While Hubble OSS only includes real-time information, Hubble Timescape is an observability and analytics platform capable of storing & querying observability data that Cilium and Hubble collect. 

Isovalent Enterprise for Cilium also supports logs export to SIEM (Security Information and Event Management) platforms such as Splunk or an ELK (Elasticsearch, Logstash, and Kibana) stack.

To explore some of the Hubble enterprise features, check out the Hubble for the Enterprise blog or try out some of the labs, such as the Network Policies Lab and Connectivity Visibility Lab.

Enterprise-Grade Resilience

Isovalent Enterprise for Cilium includes capabilities for organizations that require the highest level of availability. These include features such as High Availability for DNS-aware network policy (video) and High Availability for the Cilium Egress Gateway (video).

Enterprise-Grade Support

Last but certainly not least, Isovalent Enterprise for Cilium includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Customers also benefit from the help and training from professional services to deploy and manage Cilium in production environments.

Shortening time to value with Isovalent Enterprise for Cilium Support

Many Fortune 500 companies pick Isovalent on their cloud native journey to get the expert knowledge and support their business-critical applications need. Learn what Isovalent’s support consists of, what our Customer Reliability Engineering team can do for you, and what “CuTEs” have to do with it.

Download Brief

Learn More!

To learn more about Isovalent Enterprise for Cilium 1.15, check out the following links:

Feature Status

All features introduced in this release are considered “Limited”. Here is a brief definition of the feature maturity levels:

  • Stable: A feature that is appropriate for production use in a variety of supported configurations due to significant hardening from testing and use.
  • Limited: A feature that is appropriate for production use only in specific scenarios and in close consultation with the Isovalent team.
  • Beta: A feature that is not appropriate for production use but where user testing and feedback are requested. Customers should contact Isovalent support before considering Beta features.
Nico Vibert, Senior Staff Technical Marketing Engineer
Dean Lewis, Senior Technical Marketing Engineer
Raphaël Pinson, Senior Technical Marketing Engineer
