
What’s New in Networking for Kubernetes in the Isovalent Platform 1.17

Dean Lewis
Nico Vibert

The 1.17 release of Isovalent Networking for Kubernetes introduces a broad set of updates designed to support more predictable, secure, and scalable operations. Whether you’re building out advanced connectivity for large-scale workloads, replatforming from legacy CNIs, or enforcing stricter policy boundaries in multi-tenant environments, this release focuses on getting powerful capabilities ready for day-to-day use.

Key updates include:

  • Simplified Egress Management: The Standalone Egress Gateway allows you to route traffic from pods through dedicated external nodes without requiring them to be part of the Kubernetes cluster. This opens up new deployment designs in both cloud and on-prem environments.
  • Enhanced Policy Enforcement: The introduction of Ordered Policies and Lockdown mode provides administrators with more granular control over network policies.
  • Streamlined Migrations: Tools and documentation have been updated to facilitate smoother transitions from Calico, minimizing downtime and operational overhead.
  • Gateway API Policy Control: Cilium now supports parameterized GatewayClass configurations, namespaced ingress policies, and per-route filtering through a new CRD. These features enable fine-grained, multi-tenant access control without complex application logic.
  • Easier Observability with Timescape: Timescape can now be deployed as part of the Cilium Helm chart in an integrated mode that simplifies setup, removes the need for Hubble Relay, and introduces a redesigned UI and Push API for real-time visibility.

These advancements are designed to address real-world challenges faced by platform teams, enhancing the reliability and efficiency of Kubernetes networking.

To learn more about this new release, sign up for the webinar:

What’s New in Isovalent Networking for Kubernetes 1.17 Webinar

Get a front-row seat to the latest enterprise-grade Cilium release. Join us to explore how Isovalent Networking for Kubernetes 1.17 helps platform teams harden security, improve scalability, and streamline operations in production Kubernetes clusters.

Register for the webinar!

Let’s take a closer look at what’s new.

Standalone Egress Gateway

The Cilium Egress Gateway lets you route traffic originating in pods and destined for specific CIDRs outside the cluster through particular nodes.

When the Egress Gateway feature is enabled and appropriate policies are in place, outbound traffic from the cluster is masqueraded with deterministic, gateway-assigned IP addresses. This enables consistent source IPs for traffic leaving the cluster: useful not only for accessing legacy systems, but also for allowing non-Kubernetes-aware firewalls to enforce controls based on predictable, namespace or pod-specific IPs. 

In previous releases, the Egress Gateway nodes were designed to be part of the Kubernetes cluster where Cilium was running. In 1.17, we are introducing support for standalone Egress Gateway nodes that are not part of the Kubernetes cluster.

The introduction of the Standalone Egress Gateway feature opens up a number of new design opportunities for deployments in your datacenter or public cloud environment. When this feature is enabled, a synthetic CiliumNode object is created within the workload cluster running Cilium. This virtual representation enables the Egress Gateway Policy object (IsovalentEgressGatewayPolicy) to correctly identify and route traffic through the appropriate standalone Egress Gateway node.

apiVersion: cilium.io/v2
kind: CiliumNode
metadata:
  name: <gateway-name>
  annotations:
    cilium.io/do-not-gc: "true"
    ipam.cilium.io/ignore: "true"
  labels:
    # Labels matched via iegp.spec.egressGroups.nodeSelector
    egw-node: "true"
spec:
  addresses:
  - ip: <gateway-node-ip>
    type: InternalIP

The cilium.io/do-not-gc=true and ipam.cilium.io/ignore=true annotations prevent the Cilium Operator from reconciling the synthetic objects, given that they do not correspond to nodes that are part of the Kubernetes cluster.
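
To steer pod traffic through the standalone gateway, an IsovalentEgressGatewayPolicy can then select the synthetic node by its label. The manifest below is an illustrative sketch, not taken from this release: the pod selector, destination CIDR, and policy name are assumptions.

apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  name: egress-via-standalone-gw
spec:
  # Illustrative selector: pods labeled app=legacy-client
  selectors:
  - podSelector:
      matchLabels:
        app: legacy-client
  # Example legacy network whose traffic is steered via the gateway
  destinationCIDRs:
  - 192.168.100.0/24
  egressGroups:
  # Matches the egw-node=true label set on the synthetic CiliumNode
  - nodeSelector:
      matchLabels:
        egw-node: "true"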

The video below shows this new feature in action!

BGP Unnumbered

Connecting your Kubernetes clusters to the broader network typically involves establishing BGP sessions between cluster nodes and upstream peers (often Top of Rack (ToR) switches). While statically defining peer IP addresses is manageable for most environments, it can become operationally burdensome in large-scale or highly dynamic infrastructures.

This challenge isn’t unique to Kubernetes. In traditional networking, BGP Unnumbered, also known as BGP auto-discovery or BGP auto-peering, has long been a solution. It simplifies configuration by enabling peers to discover each other dynamically over directly connected interfaces, removing the need to manually assign and manage peer IPs.

Isovalent 1.17 brings BGP Unnumbered support to Kubernetes, allowing clusters to seamlessly peer with network devices using link-local IPv6 addresses and reducing operational overhead.

Instead of specifying peer IP addresses in your IsovalentBGPClusterConfig (with spec.bgpInstances.peers.peerAddress), you only need to specify the physical node interface:

apiVersion: isovalent.com/v1alpha1
kind: IsovalentBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      rack: rack0
  bgpInstances:
  - name: "instance-65001"
    localASN: 65001
    peers:
    - name: "peer-65001-tor1"
      autoDiscovery:
        mode: Unnumbered
        unnumbered:
          interface: eth0 ## specify the physical node interface rather than multiple peer IP addresses
      peerConfigRef:
        name: "cilium-peer"

With this configuration, Cilium will use the link-local IPv6 address of the neighboring router discovered on the configured interface for peering. To allow this auto-discovery, the neighboring router should be configured to send Router Advertisement messages (ICMPv6 messages, as defined in RFC 4861, Section 4.2) on the interface connecting to the Cilium node.

With unnumbered peering enabled, Cilium also starts sending Router Advertisement messages on the configured interface, so that the neighboring router can learn the Cilium node's link-local IPv6 address in the same way.
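For completeness, the peerConfigRef in the cluster configuration points to an IsovalentBGPPeerConfig object. A minimal sketch might look like the following; the timer and graceful-restart values are illustrative assumptions, not recommendations from this release:

apiVersion: isovalent.com/v1alpha1
kind: IsovalentBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  # Illustrative timers; tune to match your fabric's expectations
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 15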

BGP Unnumbered streamlines BGP configuration in large-scale or ephemeral environments where static peer definitions are impractical and reduces manual configuration overhead, allowing you to integrate your clusters more dynamically with upstream routers.

BGP Prefix Aggregation

Another BGP improvement is support for Prefix Aggregation. Also known as “route summarization”, it consolidates the Kubernetes Service IP addresses advertised to peers into a larger summary prefix.

By default, the Cilium BGP Control Plane advertises exact routes for Service VIPs (/32 prefixes for IPv4, /128 for IPv6). In large environments where each cluster might advertise hundreds of Service IPs, the network devices learning these routes and installing them in their routing tables can run out of space in their TCAM/FIB tables.

In the example below, 50 individual prefixes are advertised to BGP peers:

$ cilium bgp routes advertised ipv4 unicast peer 192.168.121.201 --node t1-1
Node   VRouter   Peer              Prefix            NextHop           Age     Attrs
t1-1   64512     192.168.121.201   100.64.0.101/32   192.168.121.151   33m8s   [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}]   
       64512     192.168.121.201   100.64.0.102/32   192.168.121.151   6m23s   [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}]  
[....] 
       64512     192.168.121.201   100.64.0.149/32   192.168.121.151   3m26s   [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}]   
       64512     192.168.121.201   100.64.0.150/32   192.168.121.151   3m26s   [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}] 

With BGP Prefix Aggregation, you can now group these service IPs into a larger prefix, advertising only a supernet. The BGPAdvertisement CRD now supports the aggregation feature, which can be customized by specifying the prefix aggregation length:

apiVersion: isovalent.com/v1alpha1
kind: IsovalentBGPAdvertisement
metadata:
  name: ilb-service-advertisement
  labels:
    advertisement: ilb
spec:
  advertisements:
    - advertisementType: Service
      service:
        aggregationLength: 24 ## The prefix length of the summary route can be customized.
        addresses:
          - LoadBalancerIP

With a /24 prefix length, a single /24 prefix is advertised to our peer instead of 50 individual prefixes.

When using a /27 prefix length instead, two prefixes are advertised to our neighbours:

$ cilium bgp routes advertised ipv4 unicast peer 192.168.121.201 --node t1-1
Node   VRouter   Peer              Prefix            NextHop           Age   Attrs
t1-1   64512     192.168.121.201   100.64.0.128/27   192.168.121.151   2s    [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}]   
       64512     192.168.121.201   100.64.0.96/27    192.168.121.151   2s    [{Origin: i} {AsPath: } {Nexthop: 192.168.121.151} {LocalPref: 100}] 

Prefix Aggregation is a welcome addition for anyone running Kubernetes at scale. 

Rather than flooding your network fabric with hundreds of individual /32s or /128s, this feature allows you to advertise summarized prefixes, greatly reducing route churn and conserving valuable space in hardware tables like the FIB or TCAM. 

Watch the demo to see how BGP Prefix Aggregation reduces route table size and streamlines Kubernetes network connectivity.

Migrating From Calico Network Policies

While many users adopt the Isovalent platform as part of greenfield deployments, we also aim to support teams migrating from existing production clusters. We have shared guidance about migration from other CNIs such as Flannel, including two hands-on labs (Flannel to Cilium and Calico to Cilium) demonstrating how to move to Cilium with minimal downtime. Isovalent customers also benefit from the expertise of our Customer Success Architects during onboarding and migration planning.

One of the more complex aspects of a migration, especially from Calico, is network policy translation. While both Calico and Cilium extend Kubernetes Network Policies, they differ in syntax, behavior, and enforcement models. A key difference is that Calico supports policy ordering through an order field, allowing fine-grained control over the sequence in which policies are evaluated. In contrast, Cilium Network Policies (CNPs) follow a different model, which we explore in depth in the Cilium Network Policy Deep Dive eBook.

To make the migration process smoother, the Isovalent platform now includes two complementary enhancements:

Isovalent Network Policy (INP)

INP is an extension of Cilium Network Policy that supports explicit policy ordering. While CNP remains the recommended choice for most use cases, INP is particularly helpful when:

  • You need a blanket policy that overrides others with high priority
  • You want to define specific exceptions with lower order values
  • You are migrating from policy models that rely on ordering (e.g., Calico)

By default, policies have an order of 0 (that includes API types without an order field, such as standard Cilium Network Policies). Lower-ordered rules take precedence over higher orders (5 takes precedence over 10). Let’s go through a couple of examples. 

With the policy below, you can deny all traffic from non-production to production namespaces, while making an exception to allow access to a directory-service in the production namespace.
Because the exception rule has an order of -2 (INPs support negative orders), it overrides the deny policy set with order -1.

apiVersion: isovalent.com/v1alpha1
kind: IsovalentClusterwideNetworkPolicy
metadata:
  name: restrict-production
specs:
- endpointSelector:
    matchLabels:
      io.cilium.k8s.namespace.labels.role: production
      k8s-app: directory-service
  ingress:
    - fromEndpoints: 
      - {} # allows all endpoints clusterwide.
      toPorts:
        - ports:
          - port: "8080"
            protocol: TCP
  enableDefaultDeny:
    ingress: false
  order: -2

# This rule denies all traffic to production from non-production namespaces
- endpointSelector:
    matchLabels:
      io.cilium.k8s.namespace.labels.role: production
  ingressDeny:
    - fromEndpoints:
      - matchExpressions:
        - key: io.cilium.k8s.namespace.labels.role 
          operator: NotIn
          values: ["production"]
  enableDefaultDeny:
    ingress: false
  order: -1 # Lower numbers take precedence over higher numbers

You can also create a high-priority, cluster-wide policy that ensures that all systems in a monitoring namespace can access all other namespaces, taking precedence over all other policies by setting a lower order value.

apiVersion: isovalent.com/v1alpha1
kind: IsovalentClusterwideNetworkPolicy
metadata:
  name: allow-monitoring
spec:
  endpointSelector: {}
  ingress:
    - fromEndpoints:
      - matchExpressions:
        - key: io.cilium.k8s.namespace.labels.monitoring 
          operator: Exists
  enableDefaultDeny:
    ingress: false
  order: -1  # Policies have order 0 by default, this overrides them

With support for ordered policies, the Isovalent Network Policy CRD brings another option to apply fine-grained control to Cilium’s security model. It empowers platform teams to define global defaults, enforce strict multi-tenant isolation, and apply exceptions where needed. Whether you’re replatforming from Calico or building zero-trust environments from the ground up, policy ordering adds an essential layer of predictability and precision to network policy enforcement.

Calico Network Policy Converter Tool

One of the key challenges when migrating from Calico to Cilium is translating network policies. The syntax, semantics, and enforcement behavior differ significantly between the two platforms, making manual conversion both tedious and error-prone.

To help streamline this process, we’re excited to introduce a new migration tool for Isovalent customers. The tool parses your existing Calico Network Policies (including GlobalNetworkPolicies and NetworkSets), detects potential conflicts, and generates equivalent Cilium Network Policies and Cilium Clusterwide Network Policies.

Here’s a sample run. The tool ingests Calico Network Policy YAML manifests and generates an equivalent Cilium configuration:

./calico2cilium-netpol convert --input-files calico-globalnetworkpolicies.yaml 
ℹ️  Converting policies targeting Cilium version=1.16.0
ℹ️  Reading network policies and related CRDs file=calico-globalnetworkpolicies.yaml
ℹ️  Successfully read Calico network policies file=calico-globalnetworkpolicies.yaml num=0
ℹ️  Detecting conflicts file=calico-globalnetworkpolicies.yaml
ℹ️  No conflicts detected file=calico-globalnetworkpolicies.yaml
ℹ️  Writing output file file=calico-globalnetworkpolicies-cilium.yaml
ℹ️  Converting Calico network policies to Cilium network policies file=calico-globalnetworkpolicies-cilium.yaml num=0
ℹ️  Successfully wrote Cilium network policy file=calico-globalnetworkpolicies-cilium.yaml
ℹ️  Conversion successful converted=4
ℹ️  Generating conversion report file=conversion-report-20250507120316.json

While the tool is still in beta and may require some manual review and fine-tuning, it’s already proving extremely helpful in reducing friction during migration. Our Customer Success Architects also actively support migrations and can provide expert guidance to ensure a smooth transition.

We look forward to hearing feedback from users as we continue to improve the tool based on real-world usage.

Together, INP and the migration tool make it easier for platform and security teams to bring their existing policy models into a Cilium-powered environment, without sacrificing control or security.

Network Policy Enhancements

This new release brings several powerful enhancements to how network policies are validated, enforced, and optimized: 

HTTP Port Ranges

You can now define port ranges in HTTP-based network policies, enabling more flexible and concise rule definitions. The rule below allows HTTP GET requests to /public on TCP ports 80 through 100.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "Allow HTTP GET /public from env=prod to app=service"
  endpointSelector:
    matchLabels:
      app: service
  ingress:
  - fromEndpoints:
    - matchLabels:
        env: prod
    toPorts:
    - ports:
      - port: "80"
        endPort: 100
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"

Major Policy Engine Performance Gains

Thanks to internal optimizations, policy regeneration is significantly faster and more memory-efficient:

  • CIDR deny rule regeneration is up to 10,000x faster than in v1.16.
  • Memory usage reduced by up to 99.5% in specific benchmarks.
  • Heap allocations reduced by over 99%, improving responsiveness and scalability.

Stricter Policy Validation

The Cilium API server now actively rejects invalid policy definitions that were previously silently ignored, helping catch misconfigurations early and improve overall policy hygiene.

Policy Lockdown Mode (Opt-in)

A new security-focused option allows clusters to lock down endpoints (drop all ingress/egress traffic) if the policy map overflows. This provides stronger compliance guarantees in high-security environments.

  • Controlled via the enable-lockdown-endpoint-on-policy-overflow flag.
  • Fully reversible once the overflow condition clears.
  • Can be paired with BPF map pressure metrics to enable proactive alerting and monitoring.
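
As a sketch, the flag can be passed through the Cilium Helm chart’s extraConfig escape hatch, which injects entries into the cilium-config ConfigMap. The flag name comes from this release; placing it under extraConfig is our assumption, so check the documentation for your deployment method:

# values.yaml snippet (assumed Helm path; flag name per this release)
extraConfig:
  enable-lockdown-endpoint-on-policy-overflow: "true"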

Together, these changes improve the security, performance, and reliability of Cilium’s policy engine.

Secure Lightweight Cilium Images

Starting with version 1.17, Isovalent Networking for Kubernetes now defaults to a lightweight Cilium agent image built on a minimal Linux distribution. This is a key step in our long-term initiative to adopt distroless containers and reduce CVEs associated with base images.

The new lightweight image is fully functional and includes everything required to run the Cilium agent. Ubuntu-based images remain available and will be supported through 1.18. Users can easily switch between image bases via Helm values or manifest customization.
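
Switching image bases is typically done through standard Cilium Helm image values; the snippet below is a hedged sketch, with the repository and tag shown as placeholders rather than actual image coordinates:

# values.yaml snippet (placeholder image reference, not a real tag)
image:
  override: "quay.io/isovalent/cilium:<desired-tag>"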

This update not only aligns with industry best practices for secure, minimal containers, but also paves the way for future image hardening.

For Red Hat OpenShift environments, we continue to use Red Hat UBI-based images to meet CNI certification requirements, another form of minimal, secure base image. 

New Red Hat OpenShift Operator for Isovalent Networking for Kubernetes

Cilium has supported Red Hat OpenShift since its early versions, and has consistently been one of the first third-party CNIs to achieve certification for each new OpenShift release.

With the release of 1.17, we are introducing the Cilium LifeCycle Operator, which provides an improved deployment experience on Red Hat OpenShift. This replaces the current Cilium Operator Manager for Red Hat OpenShift, which will be deprecated in a future release.

The new operator lays the groundwork for advanced configuration validation and enhanced observability in Cilium. It also streamlines the OpenShift experience by automatically configuring the necessary permissions for Cilium features when they’re enabled, eliminating the need for platform owners to set these up separately, as was previously required.

The abbreviated output below shows the new operator resources deployed in the cilium namespace.

$ oc get pods -n cilium
NAME                                       READY   STATUS    RESTARTS      AGE
cilium-envoy-27w2r                         1/1     Running   0             28m
....
cilium-operator-5fb7cbfd4f-c95lr           1/1     Running   1 (26m ago)   41m
cilium-qr59l                               1/1     Running   0             41m
...
clife-controller-manager-c6c6c685b-cds5l   1/1     Running   0             22m

With this operator update, the CiliumConfig CRD now includes health status reporting, providing clear insights into the state of Cilium features. For example, it can surface status messages related to GatewayAPI support, offering improved visibility directly through the CRD.

$ oc get ciliumconfig ciliumconfig -o json | jq .status
{
  "conditions": [
    {
      "lastTransitionTime": "2025-05-23T14:34:12Z",
      "message": "APIs not available: [Gateway API resource definitions are not available. Please install the CRDs if you wish to use the Gateway API. Cert-manager resource definitions are not available. Please install the CRDs if you wish to use Cert-manager to automatically generate TLS certificates.]",
      "reason": "APIMissing",
      "status": "True",
      "type": "APINotAvailable"
    },
    {
      "lastTransitionTime": "2025-05-23T15:52:36Z",
      "message": "success",
      "reason": "ValuesReadable",
      "status": "False",
      "type": "ValuesError"
    },
    {
      "lastTransitionTime": "2025-05-23T15:52:36Z",
      "message": "success",
      "reason": "NoProcessingError",
      "status": "False",
      "type": "ProcessingError"
    }
  ]
}

Gateway API: Introducing Parameterized Configurations, Namespaced Policies, and Fine-Grained Route Controls

In the rapidly evolving landscape of Kubernetes networking, Isovalent Networking for Kubernetes continues to lead by embracing and extending the Gateway API. Let’s focus on three of these enhancements in 1.17:

  1. Parameterized GatewayClass Configurations: Allowing infrastructure teams to define reusable configurations for Gateways, promoting consistency and reducing duplication.
  2. Namespaced Policy Controls: Enabling multi-tenant environments to enforce distinct policies per namespace, ensuring isolation and tailored access controls.
  3. Enhanced HTTPRoute Capabilities: Providing granular control over routing behaviors, facilitating sophisticated traffic management strategies.

Parameterized GatewayClass Configurations

Traditionally, configuring Gateways required embedding specific settings directly within each Gateway resource, leading to redundancy and potential misconfigurations. With the introduction of the parametersRef field in the GatewayClass resource, Cilium enables a more modular approach. This field references a separate CiliumGatewayClassConfig CRD, which encapsulates the desired configuration parameters.

In this example, the GatewayClass named cilium-with-config references a CiliumGatewayClassConfig named cilium-gateway-config in the default namespace.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: cilium-with-config
spec:
  controllerName: io.cilium/gateway-controller
  parametersRef:
    group: cilium.io
    kind: CiliumGatewayClassConfig
    name: cilium-gateway-config
    namespace: default

The CiliumGatewayClassConfig CRD allows you to specify various service-level configurations that the Cilium controller will apply when provisioning Gateways. For instance, you can define the service type, load balancer settings, and source ranges.

The example configuration below specifies that any Gateway using the associated GatewayClass should be exposed via a LoadBalancer service, with access restricted to the IP range 10.10.10.10/32.

apiVersion: cilium.io/v2alpha1
kind: CiliumGatewayClassConfig
metadata:
  name: cilium-gateway-config
  namespace: default
spec:
  service:
    type: LoadBalancer
    loadBalancerSourceRanges:
    - 10.10.10.10/32

Namespaced Policy Controls

Isovalent Enterprise for Cilium 1.15 introduced support for enforcing network policies on the Envoy component used by Gateway API. This was implemented via ClusterwideCiliumNetworkPolicy, meaning all Gateway and Ingress resources in the cluster shared the same identity and policy scope.

However, in multi-tenant environments, this model falls short: security requirements often vary between tenants, and policies may need to be applied at the namespace level, including CIDR-based restrictions.

With version 1.17, namespaced policy controls are now available to address these needs. Each Gateway or Ingress can now be treated as a distinct identity, enabling fine-grained CiliumNetworkPolicy enforcement based on metadata such as name, namespace, or labels.

This enhancement empowers platform teams to define per-Gateway access policies, such as allowing specific CIDRs for internal services while exposing others publicly, making it ideal for multi-tenant clusters or environments requiring public/private segmentation.

For example, you can now define two Ingress resources pointing to the same backend (e.g., podinfo) but scoped for different audiences: one for public access and another restricted to internal clients only.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public
  namespace: default
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - backend:
          service:
            name: podinfo
            port:
              number: 9898
        path: /
        pathType: Prefix
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private
  namespace: default
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - backend:
          service:
            name: podinfo
            port:
              number: 9898
        path: /
        pathType: Prefix

The Cilium agent now automatically assigns a unique identity to each Ingress based on its metadata, enabling fine-grained, selective policy enforcement.

In the example below, we define two CiliumNetworkPolicy (CNP) resources:

  • One that allows traffic to the public Ingress from any client within the host network.
  • Another that restricts access to the private Ingress to a specific client IP (e.g., 172.18.0.4/32).

This approach ensures that each entry point enforces only the access rules relevant to its intended audience.

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-cidr-for-public
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      "ingress:name": cilium-ingress-default-public
  ingress:
  - fromCIDRSet:
    - cidr: 172.18.0.1/32
---
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-cidr-for-private
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      "ingress:name": cilium-ingress-default-private
  ingress:
  - fromCIDRSet:
    - cidr: 172.18.0.4/32

When traffic hits the LoadBalancer IP of each Ingress, only requests originating from the allowed CIDRs are accepted. All others are denied at the network layer before reaching the application.

This feature enables:

  • Access isolation per team or workload in multi-tenant clusters.
  • Separation of staging and production ingress.
  • Precise control over which clients can access application services, and through which Ingress.

Namespaced Policy Controls bring Ingress-aware network policy enforcement to Cilium, simplifying complex access control scenarios without the need for brittle application-layer logic.

Enhanced HTTPRoute Capabilities

While Namespaced Policy Controls provide isolation at the Gateway or Ingress level, many modern applications expose multiple paths under a single HTTPRoute, and not all of them require the same level of access. In 1.17, the new IsovalentHTTPRouteFilter CRD introduces fine-grained access controls at the route rule level, using Gateway API’s ExtensionRef mechanism.

This feature allows you to attach policies directly to specific routes within an HTTPRoute, enforcing access control based on source IP, hostname, and URL path. For example, you can permit internal access to /version but block access to / for external clients, all within a single route.

In this example, we define an HTTPRoute and a corresponding IsovalentHTTPRouteFilter that allows access only under tightly scoped conditions.

With this configuration:

  • Requests from the 172.18.0.0/16 CIDR are only allowed if the Host header matches example.com and the path starts with /version.
  • Requests from 10.244.1.8/32 must use internal.com and access /.

All other traffic is denied by default, providing strong path-based access controls that are enforced before the request reaches the application.

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-app-1
spec:
  parentRefs:
  - name: my-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: podinfo
      port: 9898
    filters:
    - type: ExtensionRef
      extensionRef:
        group: isovalent.com
        kind: IsovalentHTTPRouteFilter
        name: http-filter-config

The IsovalentHTTPRouteFilter CRD defines the allowlist logic:

apiVersion: isovalent.com/v1alpha1
kind: IsovalentHTTPRouteFilter
metadata:
  name: http-filter-config
  namespace: default
spec:
  type: RequestFiltering
  requestFiltering:
    ruleType: Allow
    rules:
    - sourceCIDR:
        cidr: 172.18.0.0/16
      hostName:
        exact: example.com
      path:
        prefix: /version
    - sourceCIDR:
        cidr: 10.244.1.8/32
      hostName:
        exact: internal.com
      path:
        prefix: /

Hubble Timescape + Timescape UI

Isovalent Networking for Kubernetes 1.17 takes a major step forward in simplifying observability with an enhanced Hubble Timescape experience. Timescape, our time-series observability platform, is now available in a new integrated mode, making it easier than ever to get started.

Timescape Integrated Mode: Zero-Friction Deployment for Instant Insight

In this release, Hubble Timescape can now be deployed directly via the Cilium Helm chart, eliminating the need for a separate Hubble Relay installation. This integrated mode is ideal for teams seeking fast, low-effort access to historical observability data without having to manage additional infrastructure.

It offers:

  • A drop-in replacement for Hubble Relay, using the same API and supporting tools like Hubble CLI
  • Built-in Role-Based Access Control
  • Seamless upgrades alongside Cilium
  • No object storage requirements
  • Real-time and short-term historical view via the Timescape Push API

This is the simplest way to get started with Timescape, making it perfect for single-cluster environments, smaller teams, or lightweight production scenarios.

To enable this mode, set the following Helm values during your Isovalent Networking for Kubernetes installation:

enterprise:
  featureGate:
    approved:
      - HubbleTimescape
hubble:
  enabled: true
  tls:
    enabled: true
  timescape:
    enabled: true

Integrated Timescape is well-suited for small to medium-sized clusters and supports up to 2,000 flows per second with short-term time-based retention: 1 hour by default (in-memory), or 6 hours when a persistent volume claim (PVC) is configured.

For environments where flow volume exceeds 2,000 flows/s or long-term historical storage is required, we recommend deploying Timescape in standalone mode, with a dedicated ClickHouse database to ensure performance and scalability.

Push API: Real-Time Visibility, No Object Storage Needed

Introduced in 1.16 as part of Timescape Lite, the Timescape Push API offers a powerful way to stream observability events directly to Timescape in real time, with optional persistence to object storage for resilience. Think of it as the natural successor to Hubble Relay, offering the same live-streaming capabilities with the added benefit of short-term historical context.

With the Push API, you can:

  • Stream events directly to Timescape for real-time observability
  • Optionally persist events to object storage for durability
  • Run it standalone or alongside storage, depending on your requirements

The Push API feature is a key enabler for replacing Hubble Relay in Isovalent Networking for Kubernetes, and it dramatically simplifies getting started with Timescape.

Introducing the New Timescape UI

This redesigned interface replaces the Hubble Enterprise UI, offering a more intuitive, performant, and integrated observability experience. The Hubble Enterprise UI will be deprecated in a future release.

Highlights include:

  • A fresh visual design with improved usability
  • Redesigned network policy view
  • Simple deployment as part of the Timescape Helm chart, enabled by default
  • New features will be implemented in this component going forward

The screenshots below show the new Hubble Timescape UI, which retains a familiar layout and includes the redesigned policy view.

Feature Hardening & Feature Gates

As part of the 1.16 release, we introduced feature gates to give our customers greater control over how new capabilities are rolled out in their environments. Features with a Stable maturity status can be enabled by default. However, starting in 1.17, features at Limited or Beta maturity require customers to contact Isovalent support to activate them.

This gated approach allows customers to work closely with our Customer Success team to test and evaluate less mature features, helping build confidence before deploying them in production.

We’ve invested significant effort in validating feature stability, and in the 1.17 release we’re excited to announce the following promotions:

Promoted from Beta to Limited:

Promoted from Limited to Stable:

Feature Status

Here is a reminder of the feature maturity levels:

  • Stable: A feature that is appropriate for production use in a variety of supported configurations due to significant hardening from testing and use.
  • Limited: A feature that is appropriate for production use only in specific scenarios and in close consultation with the Isovalent team.
  • Beta: A feature that is not appropriate for production use, but where user testing and feedback is requested. Customers should contact Isovalent support before considering Beta features.

Learn More

To learn more about Isovalent Networking for Kubernetes, sign up for the webinar:

What’s New in Isovalent Networking for Kubernetes 1.17 Webinar

Get a front-row seat to the latest enterprise-grade Cilium release. Join us to explore how Isovalent Networking for Kubernetes 1.17 helps platform teams harden security, improve scalability, and streamline operations in production Kubernetes clusters.

Register for the webinar!

You can also browse through our resource library for eBooks, guides, tutorials, and interactive online labs:

Resource Library

Our resource library includes eBooks, white papers, case studies, online labs, videos, and much more!

Browse Resource Library

Contact Us

Curious how these new features can elevate your Kubernetes networking and security? Request a demo with an Isovalent Solutions Architect to see Cilium 1.17 in action and explore what’s possible for your environment.

Dean Lewis, Senior Technical Marketing Engineer
Nico Vibert, Senior Staff Technical Marketing Engineer
