
Isovalent Enterprise for Cilium 1.13: SRv6 L3VPN, Overlapping CIDR Support, FromFQDN in Network Policy, Grafana plugin and more!

Nico Vibert

We are proud to announce Isovalent Enterprise for Cilium 1.13!

At the recent KubeCon Europe, we saw a phenomenal interest in Isovalent Enterprise for Cilium – the enterprise-grade and hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators.

The common thread of this release is simplicity. SRv6 L3VPN lets you simplify Telco network connections. Support for FQDN in ingress streamlines network policy management (no more hard-coded IP addresses!). The Cluster Mesh improvements not only let you connect clusters with overlapping IP addresses but also reduce the operational steps needed to achieve global load-balancing across your meshed clusters. Isovalent Enterprise for Cilium can be installed on AKS clusters with a single click. Finally, the significant advancements in the Hubble and Grafana integration are bound to simplify the life of application developers and cluster operators.

This new 1.13 Enterprise release is based on the open source Cilium 1.13.2 release. If you’d like to learn more about the 1.13 release, read the release blog.

What we also heard at KubeCon is that many of you were not familiar yet with some of the advanced features that can be found in Isovalent Enterprise for Cilium. We will start this release post with a recap of these features before delving into the new capabilities!

Isovalent Enterprise for Cilium Core Features

Enterprise-grade Resilience

Isovalent Enterprise for Cilium includes capabilities for organizations that require the highest level of availability. This includes features such as High Availability for DNS-aware network policy (video) and High Availability for the Cilium Egress Gateway (video). 

Cilium and Isovalent helped our team to build a scalable Kubernetes platform which meets our demanding requirements to run mission-critical banking software in production!

Thomas Gosteli, Linux Systems Specialist, PostFinance

Platform Observability

Isovalent Enterprise for Cilium includes Role-Based Access Control (RBAC) for platform teams to let users access dashboards relevant to their namespaces, applications and environments.

This enables application teams to have self-service access to their Hubble UI Enterprise interface and troubleshoot application connectivity issues without involving SREs in the process.

Giving end users access to their own Hubble interface has significantly lowered the burden on our support team to do network debugging.

Tobias Brunner, CTO, VSHN 

Forensics and Auditing

From the Hubble UI Enterprise, operators can create network policies based on actual cluster traffic.

This popular feature is certainly one you should try for yourself:

Isovalent Enterprise for Cilium Lab

Create Network Policies based on actual cluster traffic!

Start Lab

Isovalent Enterprise for Cilium also includes Hubble Timescape – a time machine for observability data with powerful analytics capabilities.

While Hubble only provides real-time data, Hubble Timescape is an observability and analytics platform capable of storing and querying the observability data that Cilium and Hubble collect.

Isovalent Enterprise for Cilium also includes the ability to export logs to SIEM (Security Information and Event Management) platforms such as Splunk or an ELK (Elasticsearch, Logstash, and Kibana) stack.

Watch this video to learn more, and if you would like to test security event exports with Isovalent Enterprise for Cilium, try out this lab:

TLS Visibility Lab

Export JSON events to SIEM with Isovalent Enterprise for Cilium!

Start Lab

Advanced Security Capabilities via Tetragon 

Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats.

We already highlighted in the Cilium 1.13 OSS blog post several unique features such as “File Integrity Monitoring” or networking observability enhancements at the socket level.

Head over to this lab to explore some of the Tetragon enterprise features.

Enterprise-grade Support

Last but certainly not least, Isovalent Enterprise for Cilium includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved in a timely and efficient manner. Customers also benefit from the help and training from professional services to deploy and manage Cilium in production environments.


You can find out more about these features on the product page, but let’s now go through some of the new ones!

FQDN Ingress Network Policy

Feature status: Limited*

As mentioned at the top, many of the features in this release are designed to simplify network operations. Let’s review the first one. FQDN Ingress Network Policy is an elegant feature that removes the need to hardcode CIDRs in Network Policies and greatly simplifies controlling access into your cluster. This is particularly useful to restrict access to a service or a cluster from, for example, a developer’s machine.

The FQDN Ingress Network Policy lets you specify FQDNs that are allowed to reach workloads within the cluster, with the FQDNs representing entities external to the cluster.

Two steps are necessary for this feature to work: An IsovalentFQDNGroup must be created with references to the target FQDNs, and a CiliumNetworkPolicy or CiliumClusterwideNetworkPolicy must be created that allows traffic from those FQDN groups.

First, configure an IsovalentFQDNGroup CRD with the FQDNs that need to be able to access workloads within your cluster.

apiVersion: isovalent.com/v1alpha1
kind: IsovalentFQDNGroup
metadata:
  name: example-fqdn-group
spec:
  fqdns:
    - "host.external.com"

Second, create a network policy for the target pods with a cidrGroupRef statement. The value of cidrGroupRef must match the .metadata.name field of the IsovalentFQDNGroup resource above. For example, the following policy only allows inbound connections from host.external.com outside the cluster to reach endpoints in the default namespace with the label app: my-service.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "example-cidr-group-ref-policy"
  namespace: "default"
spec:
  endpointSelector:
    matchLabels:
      app: my-service
  ingress:
  - fromCIDRSet:
    - cidrGroupRef: "example-fqdn-group"
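The feature also works with a CiliumClusterwideNetworkPolicy, as mentioned earlier. As a sketch, the same rule can be expressed cluster-wide — the resource is cluster-scoped, so there is no namespace, and the policy name below is illustrative:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "example-clusterwide-cidr-group-ref-policy"
spec:
  endpointSelector:
    matchLabels:
      app: my-service
  ingress:
  - fromCIDRSet:
    - cidrGroupRef: "example-fqdn-group"
```

This applies the same ingress rule to matching endpoints in every namespace of the cluster.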

To learn more, watch the video below:

SRv6 L3VPN support

Feature status: Beta*

We initially announced Segment Routing over IPv6 (SRv6) L3VPN support in the Cilium 1.13 release but it’s worth highlighting again as our telco users are very excited about this feature.

Isovalent Enterprise for Cilium now supports SRv6 L3VPN in beta, enabling users to cross-connect Kubernetes worker nodes to other services and Kubernetes clusters over Segment Routing over IPv6 (SRv6). With this new feature, users can create virtual private networks that span multiple sites, providing secure and isolated connectivity between Kubernetes clusters, data centers, and public clouds.

SRv6 L3VPN offers a scalable and flexible solution for interconnecting multiple sites while maintaining end-to-end network slicing and service isolation.

Kubernetes Pods now have a single interface with only a default route. Nothing complex, no forwarding rules, no insanities of IPv6 static routes! And from a developer perspective, you just assign a VRF and the rest is really quite simple.

Daniel Bernier (Technical Director at Bell Canada)

To learn more, watch the introductory demo to SRv6 L3VPN on Cilium by Isovalent engineer Louis DeLosSantos or the KubeCon North America 2022 presentation below, with Louis and Daniel.

We will soon publish the Bell Canada case study on why SRv6 and Kubernetes simplify telco network configuration. Look out for it!

Phantom Services for Cluster Mesh

Feature status: Limited*

Cilium Cluster Mesh has become a very popular technology designed to provide load-balancing and service discovery across multiple clusters. It is also one of the technologies behind the recently-announced Cilium Mesh.

But as useful as it is, there remains an onus on the operator to create matching services on meshed clusters to achieve global load-balancing.

With the new Phantom Services feature, operating meshed clusters is getting easier. Let’s explain why.

In a cluster mesh, traffic can be load-balanced to pods across all clusters. This is achieved by using global services. A global service is a service that is created with the same spec in each cluster, and annotated with:

service.cilium.io/global: "true"

In Isovalent Enterprise for Cilium 1.13, you can now leverage Phantom Services. These are LoadBalancer services hosted in one cluster of the mesh but accessible to the whole Cluster Mesh, without creating additional resources in the rest of the clusters. They support the same visibility and policy enforcement capabilities as standard global services.

A phantom service is a LoadBalancer service associated with at least one VIP, and annotated with:

service.isovalent.com/phantom: "true"

This makes the LoadBalancer IP addresses of the phantom service accessible from all clusters in the Cluster Mesh. Source IP addresses and identities are preserved for cross-cluster communication, enabling the same visibility and policy enforcement capabilities that Isovalent Enterprise for Cilium offers for global services.
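Putting it together, here is a minimal sketch of a phantom service. It is an ordinary LoadBalancer Service carrying the phantom annotation; the service name, selector, and ports below are illustrative, not from the original post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend            # illustrative name
  annotations:
    service.isovalent.com/phantom: "true"
spec:
  type: LoadBalancer        # phantom services are LoadBalancer services
  selector:
    app: frontend           # illustrative selector
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

Once the LoadBalancer VIP is assigned in the hosting cluster, workloads in the other meshed clusters can reach it without any Service object being created on their side.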

Overlapping PodCIDR support for Cluster Mesh

Feature status: Beta*

As rigorous as you might have been with your subnet planning, there are still occasions where you might end up with clusters with overlapping IP ranges. The most common reason is, of course, mergers and acquisitions. In the past, interconnecting networks that were using the same RFC1918 range was already a common issue and unsurprisingly, it is still an issue with Kubernetes clusters.

The other reason why you may have overlapping CIDRs in your clusters is simply because… it’s simpler? We’ve heard from users with a huge churn of clusters that the operational burden of managing IP addresses is not negligible. It’s therefore attractive to build clusters with the same Pod CIDRs.

Whether you have overlapping cluster IP ranges on purpose or by accident, you will find the new Overlapping PodCIDR support for Cluster Mesh feature useful.

It allows clusters in the Cluster Mesh to have overlapping PodCIDR ranges. By using this feature – compatible with either the Global or Phantom service mentioned above – Pods can access the service backends on the remote clusters with overlapping PodCIDR ranges. L3 and L4 Network Policies are also supported.
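For context, each cluster participating in a Cluster Mesh must still be uniquely identifiable by name and ID, even when their PodCIDRs overlap. A minimal Helm values sketch for two meshed clusters, assuming the standard Cilium chart keys (names and IDs illustrative):

```yaml
# values-cluster1.yaml (illustrative)
cluster:
  name: cluster1
  id: 1

# values-cluster2.yaml (illustrative)
cluster:
  name: cluster2
  id: 2
```

The unique cluster ID is what lets Cilium disambiguate identities across clusters when the Pod IP ranges themselves are no longer distinct.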

The following diagram provides an overview of the packet flow of the Cluster Mesh with overlapping PodCIDR support, with Pod0 as the source client. When the traffic crosses the cluster boundary, the source IP address is translated to the node IP. For intra-cluster communication, the source IP address is preserved.

In the walkthrough below, we have a client deployed on Cluster1 and a Deployment and Global Service server on both clusters. The Deployment consists of a couple of httpbin Pods (httpbin is an echo application that replies with the source IP it observes).

Notice the service.cilium.io/global: "true" annotation on the Service to set it as global.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  labels:
    app: httpbin
spec:
  replicas: 2
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - name: httpbin
        image: kennethreitz/httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-service
  annotations:
    service.cilium.io/global: "true"
spec:
  type: ClusterIP
  selector:
    app: httpbin
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Let’s check connectivity from the client in Cluster1:

kubectl exec -it deploy/netshoot -- curl http://httpbin-service.default.svc.cluster.local/get

When the backend of a local cluster is selected, the response shows the original source IP of the Pod.

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin-service.default.svc.cluster.local",
    "User-Agent": "curl/7.87.0"
  },
  "origin": "10.0.0.1",
  "url": "http://httpbin-service.default.svc.cluster.local/get"
}

When the backend of a remote cluster is selected, the response shows the IP of the node the source Pod is running on.

{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "httpbin-service.default.svc.cluster.local",
    "User-Agent": "curl/7.87.0"
  },
  "origin": "192.168.0.1",
  "url": "http://httpbin-service.default.svc.cluster.local/get"
}

That’s pretty cool, right?

Note that there are currently some limitations and restrictions with this particular model: NodePort and LoadBalancer Services (except Phantom Services), L7 Network Policies, and Transparent Encryption are not supported.

Hubble Enhancements

The Hubble components included in Isovalent Enterprise for Cilium have seen significant improvements in the areas of user experience, performance, and integration with the Grafana LGTM stack.

We will be publishing some additional articles on Hubble Open Source and its enterprise counterpart soon, but here are some of the recent highlights:

Embedded Grafana Dashboard

Ever since the joint partnership between Isovalent and Grafana Labs was announced, we have been working closely on improving the observability experience for operators. Let’s review some of the benefits now available to Isovalent Enterprise for Cilium users.

First, you can now embed Grafana dashboards directly into Hubble UI Enterprise! For users who prefer to see everything from Hubble UI Enterprise, this provides access to all the relevant information from one window. Users can now monitor not only the health of the network but also the health of the Cilium deployment itself. Browse the gallery below for some of the dashboards available:

Hubble datasource plugin for Grafana

If you’re more of a Grafana power user and would like to see some of the really useful data collected by Hubble within Grafana, then you will be pleased to know the Hubble datasource plugin for Grafana is now available in the Grafana catalog!

First – the Service Map. The Service Map query renders an interactive service map from Hubble L7 metrics. As you can see in the short video below, you can now visualize service dependencies as well as essential HTTP metrics such as HTTP status code, request rate, and duration.

The Grafana plugin can also show historical Hubble network flows by leveraging Hubble Timescape. You can then explore and search through your network flows. In addition, for applications instrumented with OpenTelemetry, you can find the Trace ID, drill down through the exemplar, and find, for example, the microservice responsible for slowing down requests in your application.

Likewise, when looking through the service map, you can select an endpoint and visualize the relevant Prometheus-scraped metrics.

Dark Mode

Finally, for those of you who prefer a dark theme, you can now see the beautiful Hubble UI Enterprise interface in a dark mode.

Let us know which theme you will choose! I think I might go over to the dark side of Hubble.

Endpoint Routes with BPF Host Routing

Feature status: Stable*

It is now possible to enable Endpoint Routes and BPF Host Routing simultaneously. BPF Host Routing fully bypasses iptables and the upper host stack, achieving a faster network namespace switch than regular veth device operation. This option is enabled automatically if your kernel supports it (mainline kernel >= 5.10).
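As a minimal sketch, and assuming the standard Cilium Helm chart keys, enabling both features together might look like the following (setting hostLegacyRouting to false selects BPF Host Routing on kernels that support it):

```yaml
# Helm values sketch (assumes standard Cilium chart keys)
endpointRoutes:
  enabled: true            # install a per-endpoint route for each Pod
bpf:
  hostLegacyRouting: false # use BPF host routing, bypassing iptables (kernel >= 5.10)
```

On older kernels, Cilium falls back to legacy host routing regardless of this setting.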

Azure Marketplace

We also recently announced that Isovalent Enterprise for Cilium is now available to Azure Kubernetes Service (AKS) customers as a one-click upgrade in the Microsoft Azure Marketplace. Find Isovalent Enterprise for Cilium on the Azure marketplace here and read our detailed Azure partner page.

Learn More!

There are many other features available in Isovalent Enterprise for Cilium! If you’d like to learn more, check out the following links:

  • Join the 1.13 release webinar – with Thomas Graf, Co-Creator of Cilium, CTO and Co-Founder of Isovalent, to learn more about the latest and greatest open source and enterprise features of Isovalent Enterprise for Cilium and Cilium 1.13.
  • Request a Demo – Schedule a demo session with an Isovalent Solution Architect
  • Cilium 1.13 Release Blog – Gateway API, mTLS datapath, Service Mesh, BIG TCP, SBOM, SNI NetworkPolicy, …
  • Learn more about Isovalent & Cilium – The resource library lists guides, tutorials, and interactive labs.

Feature Status

Here is a brief definition of the feature maturity levels used in this blog post:

  • Stable: A feature that is appropriate for production use in a variety of supported configurations due to significant hardening from testing and use.
  • Limited: A feature that is appropriate for production use only in specific scenarios and in close consultation with the Isovalent team.
  • Beta: A feature that is not appropriate for production use, but where user testing and feedback is requested. Customers should contact Isovalent support before considering Beta features.
Nico Vibert, Senior Staff Technical Marketing Engineer
