Isovalent & Cisco ACI: Better Together

Cisco’s story started when Stanford University students and staff solved one of the most important challenges in computer networking: connecting disparate networks. The solution – the multi-protocol router – is at the root of the modern internet as we know it.
Decades later, another challenge emerged: networking for cloud-native workloads. In 2015, four engineers working in the Cisco ACI Business Unit began working on a side project to provide high-performance container networking, security and observability. They called it Cilium.
These engineers, alongside other co-founders, went on to establish Isovalent — taking the cloud-native world by storm. Powered by the revolutionary kernel technology eBPF, Cilium was selected by Amazon, Azure, and Google as the preferred networking platform for their Kubernetes offerings and distributions. Donated to the Cloud Native Computing Foundation in 2021, Cilium is the only Graduated project in the Cloud Native Networking category – a testament to its stability and its uptake.
With Cisco acquiring Isovalent, the story has come full circle: the Isovalent Enterprise Platform – based on an enterprise-grade, hardened version of Cilium – integrates cleanly with the Cisco ACI platform.
While ACI is the policy-driven foundation for the datacenter fabric, Isovalent is the connectivity, observability, and enforcement layer inside the Kubernetes cluster. Together, they solve one of today’s most pressing questions:
How do we connect, secure, and operate dynamic cloud-native workloads — without losing the visibility and control we’ve come to expect from traditional networking?
In this blog post, we will review why Cisco ACI and the Isovalent Enterprise Platform work best together. If you’re in a rush, check out the fun bee-inspired infographic, which you can download here.
Recommended Pre-Reading
If you’re a network engineer still getting to grips with Kubernetes, don’t worry – we’ve got something for you. This short eBook is approaching 10,000 downloads (thank you to all the readers!) and is a great starting point for network engineers who are still familiarising themselves with Kubernetes semantics.

What is the Isovalent Enterprise Platform?
The Isovalent Enterprise Platform is a hardened, enterprise-grade distribution of the open-source projects Cilium, Hubble, and Tetragon – all built and supported by the creators of Cilium.
- Cilium enhances Kubernetes networking and security using eBPF.
- Hubble delivers deep network observability and tracing.
- Tetragon adds runtime visibility and enforcement.
For simplicity, when we refer to Cilium, Hubble, or Tetragon throughout this post, we’re referring to their enterprise-grade equivalents within the Isovalent Enterprise Platform.
Before we begin to explore some of the benefits and considerations when using Cisco ACI with the Isovalent Platform together, let’s tackle a question a few of you may have.
Flexible Fabric Choices: ACI and EVPN VXLAN
We recognize that networking teams have diverse preferences. While this blog focuses on Cisco ACI, the benefits described — especially around BGP integration, observability, and Kubernetes-native security — also apply to Nexus 9000 deployments in NX-OS mode, particularly when using VXLAN/EVPN.
From an Isovalent perspective, both ACI and NX-OS VXLAN fabrics are fully supported. Our customer success team works with each customer to provide design guidance tailored to their preferred network architecture.
Topology Overview
At a high level, you can think of each Kubernetes cluster as a spoke, and the ACI fabric as the hub. The goal is to ensure consistent, secure, and performant connectivity between the clusters and the rest of the network.
For those less familiar with Kubernetes networking, here’s a quick recap:
- A Kubernetes cluster consists of a control plane and a set of worker nodes.
- Pods, the smallest deployable unit in Kubernetes, run on these nodes.
- Each pod is assigned a unique IP address from a range called the PodCIDR.
- Because pods are ephemeral, Kubernetes introduces Services as a stable abstraction — assigning deterministic IPs and DNS entries to groups of pods.
To make PodCIDRs and Service IPs routable from outside the cluster, the Isovalent Platform uses BGP to advertise them to ACI leaf switches via L3Out — which we’ll explore in more detail shortly.

What Will Network Engineers Expect?
Overall, network engineers will expect the following characteristics:
- Optimal connectivity – ensuring access from/to pods and services via the ACI fabric is resilient and performant
- End-to-end security – providing cluster-wide and fabric-wide micro-segmentation and robust security controls so the platform can cater to multi-tenant environments
- Observability and operations – empowering engineers to monitor application health during Day 2 operations
Let’s review each area one by one, beginning with networking.

Flexible Cluster Connectivity
Connecting a Kubernetes cluster to a traditional network fabric isn’t always straightforward. Pods and services use dynamically assigned IPs, often from ranges that the rest of the network isn’t aware of. For Cisco ACI users, this can introduce challenges around routing, IP visibility, and egress design. We will explain how the Isovalent Enterprise Platform addresses these challenges.
At the heart of the connectivity are two key components:
- ACI L3Out: defines how traffic enters and exits the ACI fabric.
- Cilium’s BGP support: advertises pod and service IPs to the fabric, ensuring reachability and deterministic routing.
The best way to explain this concept is by going through the lab we used for the 3-part Isovalent and ACI webinar series.
Lab Example
In our lab, the Kubernetes worker nodes sit in a tenant VRF called Isovalent, with an L3Out connecting to a BGP peer. The default gateway (10.237.101.4) is a floating SVI, allowing the gateway to be available across multiple leaf switches — improving resilience and simplifying configuration.
We also connected a VM to a separate bridge domain to demonstrate full bidirectional communication between Kubernetes pods and traditional VMs — all within the same ACI fabric.

This setup enables:
- Predictable north-south routing for services exposed via Cilium
- Seamless pod-to-VM communication using ACI’s native forwarding and Cilium’s built-in BGP
- Automated route advertisement using dynamic BGP and tenant-specific VRFs
To simplify gateway configuration for our Kubernetes worker nodes, we use a Floating SVI L3Out. Unlike standard SVIs, which require per-leaf configuration, a Floating SVI allows the gateway to be available across multiple leaf switches — providing both failover and operational simplicity.
Check the appendix for a brief explainer on L3Out.
Versatile Networking
Once your Kubernetes cluster is connected to the ACI fabric, the next key decision is how traffic should flow between pods inside the cluster — and how that traffic interacts with the underlying network.
The Isovalent Enterprise Platform supports two primary networking models with Cilium:
- Overlay routing (VXLAN or GENEVE)
- Native routing (BGP or auto-route injection)
Each has trade-offs, and both are fully compatible with Cisco ACI.
Overlay Mode
In overlay mode, Cilium establishes tunnels (e.g., VXLAN) between nodes. This approach:
- Requires no fabric awareness of PodCIDRs
- Keeps cluster config simple
- Comes with a small performance penalty (~8%) due to encapsulation
Overlay mode is the default option, as it makes your life easier, but it comes at the cost of an additional encapsulation header and the processing that goes with it.
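As a minimal sketch, overlay mode can be selected at install time via Helm values (assuming a recent Cilium release where `routingMode` and `tunnelProtocol` are the relevant keys; older versions used a single `tunnel` value):

```yaml
# values.yaml (sketch) — select overlay mode at install time.
# Key names assume Cilium 1.14+; adjust for your release.
routingMode: tunnel      # encapsulate pod-to-pod traffic between nodes
tunnelProtocol: vxlan    # alternatively: geneve
```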

Native Routing Mode
For high-performance or latency-sensitive applications, native routing mode is preferred. Here, PodCIDRs are advertised via BGP, and packets are routed directly without tunneling.
Cilium includes built-in BGP support, allowing:
- Each node to advertise its PodCIDR(s) to ACI leaf switches
- ACI to treat pod traffic like any other routed workload
- Consistent reachability for external and intra-cluster traffic
For optimal network performance, the “native routing” mode is preferred.
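For illustration, here is a minimal Helm values sketch for native routing with the built-in BGP control plane (the CIDR is illustrative and must match your cluster's pod address range):

```yaml
# values.yaml (sketch) — native routing with BGP, no encapsulation.
routingMode: native
ipv4NativeRoutingCIDR: 10.244.0.0/16  # illustrative; traffic within this range is routed natively
bgpControlPlane:
  enabled: true                       # lets nodes advertise PodCIDRs to the fabric
```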

Choosing the Right Model
| Routing Type | Pros | Considerations |
|---|---|---|
| Overlay (VXLAN/GENEVE) | Simple setup, no BGP required, scalable | Slight performance hit due to encapsulation |
| Native (BGP) | Highest performance, no overlays, low latency | Requires BGP config in both Cilium and ACI |
| Native (Auto Route Injection) | Simpler than BGP, fast routing without encapsulation. Adequate for PoC/small scale. | Injects static routes into node routing tables and requires all nodes to be on the same network. |
In ACI environments, native BGP routing is the ideal choice — but overlay is a valid option for lower-complexity or non-production clusters, and native routing with auto-route injection is fine for PoC and small-scale environments.
Built-in L7 Load Balancer
Once your Kubernetes workloads are connected and routing internally, the next challenge is how to expose services to users or systems outside the cluster. Whether it’s for internal teams, external APIs, or cross-cluster access, service exposure must be reliable, secure, and observable.
Cilium provides built-in support for:
- NodePort
- LoadBalancer Services
- Ingress
- Gateway API
You can learn more about Kubernetes Services in the Kubernetes Service documentation and in the Kubernetes eBook for network engineers, from which the following diagrams are taken:
LoadBalancer Services IP Address Management
With Cilium, LoadBalancer services can be assigned IPs from a custom, cluster-scoped pool using CiliumLoadBalancerIPPool. These IPs are then advertised to ACI via BGP, making them:
- Fabric-routable
- Namespace-aware
- Deterministic and predictable
For example, you can define an IP address pool such as 10.237.101.32/27 and assign it to Services with a particular tenant (using labels) or namespace. When created, Services will be assigned IP addresses from the 10.237.101.32/27 pool as long as they match the selector criteria.
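As a hedged example, such a pool could look like the sketch below (assuming Cilium 1.14+, where pool CIDRs live under `blocks` — earlier releases used `cidrs` — and the `tenant` label is purely illustrative):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: tenant-a-pool
spec:
  blocks:
  - cidr: 10.237.101.32/27        # fabric-routable range advertised via BGP
  serviceSelector:
    matchLabels:
      tenant: tenant-a            # only Services with this label draw from the pool
```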
Ingress and Gateway API Support
Cilium also includes native support for both Kubernetes Ingress and the newer Gateway API. These provide:
- Layer 7-aware routing, including path- and host-based matching
- TLS termination
- Multi-tenancy support, isolating traffic by namespace or label
This removes the need to deploy separate ingress controllers (e.g. NGINX or Envoy), reducing operational overhead and standardizing on one datapath across L3–L7.
Operational Value
For platform and network teams, this approach offers:
- Simplified IP management — no external load balancer such as MetalLB required
- Seamless integration — IPs and routes are visible to ACI via BGP
- Tenant-aware service exposure — namespace mapping aligns with ACI ESGs or contracts
This means applications running on Kubernetes can be exposed just like any other ACI-connected workload — no special-case routing, no extra load balancers, and no compromise on performance or control.
Built-in BGP
For platform teams deploying Kubernetes in ACI environments, routing is critical. Pod and service IPs need to be visible to the network, and services must be reachable — not only from inside the cluster, but also from connected VMs, users, or systems outside of it.
The Isovalent Enterprise Platform solves this by integrating native BGP support directly into Cilium — enabling Kubernetes nodes to:
- Advertise PodCIDRs
- Announce Service IPs
- Peer directly with ACI leaf switches via L3Out
This eliminates the need for external BGP daemons (like FRR) and allows the Kubernetes cluster to integrate into the network like any other routed domain.
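To make this concrete, here is a minimal CiliumBGPPeeringPolicy sketch. The ASNs and node label are illustrative, the peer address mirrors the lab's floating SVI gateway, and newer Cilium releases also offer the CiliumBGPClusterConfig family of resources:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: aci-peering
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled               # only labelled worker nodes peer with the fabric
  virtualRouters:
  - localASN: 65010              # illustrative ASN for the cluster
    exportPodCIDR: true          # advertise each node's PodCIDR
    neighbors:
    - peerAddress: 10.237.101.4/32   # ACI floating SVI from the lab example
      peerASN: 65001                 # illustrative fabric ASN
```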
A Familiar Experience for Network Engineers
Once BGP peering is established between Kubernetes nodes and ACI, standard tooling and commands can be used to verify routing state — just as you would on any traditional infrastructure.
Cilium comes with a CLI that provides a familiar experience for IOS CLI users. You can see, for example, which prefixes are advertised from Cilium to the ACI fabric:
Of course, you can verify on ACI that the BGP sessions are established and that we are indeed receiving multiple prefixes:
Since every organization follows BGP practices tailored to its specific requirements, our recommendations may not cover every scenario, but at a minimum you should consider these best practices (a neighbor-level configuration sketch follows the list):
- Use MD5 authentication between BGP peers
- Enable route-maps or prefix filters on the ACI side to restrict accepted prefixes
- Adjust timers or enable BFD for faster failure detection
- Use ACI BGP Dynamic Neighbors to simplify configuration in larger clusters
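As a sketch of how some of these practices map onto Cilium's side of the peering (fields from the v2alpha1 API; the secret name and timer values are illustrative), the neighbors block of the policy shown earlier could be extended like so:

```yaml
# Fragment (sketch): hardened neighbor settings inside a CiliumBGPPeeringPolicy.
neighbors:
- peerAddress: 10.237.101.4/32
  peerASN: 65001
  authSecretRef: bgp-md5-password  # Kubernetes Secret holding the MD5 password
  holdTimeSeconds: 9               # tighter timers for faster failure detection
  keepAliveTimeSeconds: 3
  gracefulRestart:
    enabled: true
    restartTimeSeconds: 120
```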
Reach out to our Solutions Architect team to discuss optimal design strategies for your own environment.
High Performance for AI Workloads
AI workloads place demanding requirements on network infrastructure: extremely high throughput, low latency, and uncompromising reliability. These workloads often span thousands of GPUs across multiple clusters, with long-running training jobs and massive data transfers that leave little room for packet loss or inconsistency.
While Cisco ACI delivers high underlay performance, the container networking layer has traditionally been a bottleneck. In many environments, Kubernetes default networking introduces a 30–35% drop in throughput compared to host networking.
The Isovalent Enterprise Platform is built to remove that bottleneck — bringing host-level performance to containerized AI workloads through a modern, kernel-native datapath.
The table below summarizes how Cilium helps meet the performance demands of modern AI platforms:
| Challenge | Solution |
|---|---|
| High I/O and low latency requirements | Cilium uses eBPF and supports XDP (eXpress Data Path) for high-performance packet processing, delivering performance close to DPDK. |
| Large-scale, multi-node cluster deployments | Cilium eliminates kube-proxy and scales efficiently across thousands of nodes — proven in real-world, large-scale AI use. |
| Bridging the host vs container performance gap | Optimized container networking datapath reduces overhead and narrows the performance gap with native host networking. |
| Handling large data transfers efficiently | Support for BIG TCP enables larger packets and significantly improves throughput for data-heavy workloads. |
| Reducing kernel overhead and latency | Cilium replaces iptables with eBPF-native datapaths, minimizing latency and maximizing CPU efficiency. |
With these capabilities, the Isovalent Enterprise Platform ensures that network performance is no longer a limiting factor for high-scale, GPU-intensive workloads — seamlessly complementing the speed and reliability of Cisco ACI underneath.
If you’d like to learn more about how Isovalent can secure and run AI workloads, click on the images below.

An Easier Path to Zero Trust
While establishing connectivity to and from our applications running in Kubernetes is essential, it’s equally critical that we protect our assets and apply security controls.
As organizations adopt Zero Trust principles to reduce risk and contain lateral movement, Kubernetes presents new challenges: workloads are ephemeral, services are abstracted, and traditional IP-based controls are often insufficient on their own.
Cisco ACI provides strong segmentation and security at the infrastructure level. The Isovalent Enterprise Platform complements this by bringing fine-grained, identity-aware security inside the Kubernetes cluster — using Cilium’s powerful network policy engine.
Identity-Aware Network Policies with Cilium
Cilium extends Kubernetes NetworkPolicies with advanced capabilities that make Zero Trust implementation both more granular and more practical (a sample policy follows this list):
- Layer 7-aware rules: Enforce restrictions based on HTTP methods, paths, or headers (e.g., GET /metrics)
- DNS-based policies: Control egress access based on domain names rather than hardcoded IPs
- Label-aware enforcement: Use Kubernetes labels and namespaces as policy selectors — ideal for multi-tenant or service-aligned segmentation
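Here is a sample CiliumNetworkPolicy sketch combining these capabilities (the app labels, port, and domain are illustrative): it admits only GET /metrics from a monitoring workload and restricts egress to DNS plus a named domain.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: l7-and-dns-example
spec:
  endpointSelector:
    matchLabels:
      app: billing                  # illustrative workload
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: prometheus             # only the monitoring workload...
    toPorts:
    - ports:
      - port: "9090"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/metrics"          # ...and only this method/path
  egress:
  - toEndpoints:                    # allow DNS so FQDN rules can resolve
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
  - toFQDNs:
    - matchName: isovalent.com      # domain-based egress, no hardcoded IPs
```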
Observability-Driven Policy Adoption
To make Zero Trust adoption more accessible, the Isovalent Enterprise Platform includes built-in tooling to observe live traffic and suggest appropriate policies. This recommendation engine helps teams:
- Gradually introduce policies based on actual traffic flows
- Reduce the risk of unintended disruption
- Gain confidence in policy coverage and enforcement
With the Hubble UI, engineers can inspect policy decisions, visualize allowed and denied flows, and understand traffic behavior across the cluster — bringing transparency to a traditionally opaque layer.
For example, an HTTPS query to isovalent.com with curl is instantly logged within the Hubble UI and generates the Cilium Network Policy required to authorize this specific flow:

The Isovalent Platform also includes a Network Policy Change Tracker, enabling you to inspect changes to network policies and rollback when needed.
Cilium doesn’t replace ACI’s security model — it complements it. While ACI secures the fabric, Cilium secures the workloads running on top. Together, they provide a consistent foundation for segmentation, observability, and control across both traditional and cloud-native environments.
To learn more about Network Policies — and how to apply them in real-world environments — check out our two complimentary network policy eBooks:
Multi-Tenancy Security
While Kubernetes was designed for multi-tenancy, as soon as traffic leaves the cluster, identity is lost. By default, pod traffic leaving a cluster is often masqueraded (or, as we like to call it, source NATed) with the node’s IP, making it difficult for upstream systems like firewalls to distinguish between tenants.
This creates challenges for visibility, auditing, and policy enforcement. To address this, the Isovalent Enterprise Platform provides Egress Gateway support, a feature that ensures traffic from a given pod or namespace exits the cluster through a designated gateway node — using a predictable, tenant-specific IP address.

As shown in the diagram above, traffic from pods is steered through gateway nodes, where Cilium applies eBPF logic to enforce consistent source IPs per namespace. This enables external firewalls or legacy applications to allow or deny traffic based on static IP-based rules — preserving security posture and simplifying integration.
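A minimal CiliumEgressGatewayPolicy sketch (the namespace, node label, and egress IP are illustrative) shows how a namespace can be pinned to a fixed source IP:

```yaml
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: tenant-a-egress
spec:
  selectors:
  - podSelector:
      matchLabels:
        k8s:io.kubernetes.pod.namespace: tenant-a  # all pods in this namespace
  destinationCIDRs:
  - 0.0.0.0/0                       # apply to traffic leaving the cluster
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-gateway: "true"      # illustrative gateway node label
    egressIP: 10.237.101.60         # fixed, tenant-specific source IP
```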
In the ACI context, these egress IPs can be grouped into Endpoint Security Groups (ESGs), as illustrated in the diagram below.

Let’s summarize the benefits:
- Deterministic Egress IPs: Fixed IPs per namespace or service simplify external policy enforcement.
- Tenant-Aware Fabric Policies: Integrate Kubernetes namespaces into ACI’s ESG model for consistent segmentation.
- Improved Visibility: Maintain clear attribution for outbound flows — across audit logs, firewall rules, and SIEM tools.
With Egress Gateway and ESG classification, the Isovalent Platform bridges Kubernetes multi-tenancy with ACI’s fabric-level enforcement — extending security, identity, and control beyond the cluster.
High-Speed Transparent Encryption
Encrypting traffic between workloads is becoming a default expectation — whether to meet compliance requirements, protect sensitive data, or align with Zero Trust principles. But enabling encryption in Kubernetes environments can introduce performance and operational trade-offs if not done carefully.
The Isovalent Enterprise Platform provides transparent encryption for pod-to-pod and cluster-to-cluster communication, using two kernel-level options:
- WireGuard — A high-performance, low-overhead option using modern cryptography with minimal configuration.
- IPsec — A flexible alternative allowing choice of ciphers and full control over key management.

Both options enable encryption without the need for sidecars, user-space proxies, or changes to the application — helping teams achieve strong security without introducing architectural complexity.
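Enabling it is straightforward at install time; a minimal sketch of the relevant Helm values:

```yaml
# values.yaml (sketch) — transparent encryption between nodes.
encryption:
  enabled: true
  type: wireguard   # or "ipsec", which additionally requires a key stored in a Kubernetes Secret
```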
In ACI-based environments, Cilium’s encryption is especially valuable when:
- MACsec or CloudSec are not in use
- Traffic spans across clusters, data centers, or hybrid cloud environments
- Multi-tenancy requires secure separation between workloads sharing the same network fabric
You can leverage Cilium’s encryption to ensure data confidentiality and integrity across complex topologies — extending ACI’s segmentation and security guarantees into the Kubernetes layer.
The final core component is around Day 2 operations. How can we troubleshoot poor application performance? How can we visualize packets being sent over the network? And who can we turn to if we run into any issues?

Rich Network Observability
For network and security teams, Kubernetes can feel like a black box. Pods come and go. IPs are ephemeral. Services abstract away real endpoints. The result is a loss of clarity around “what’s talking to what” — and why it’s failing.
The Isovalent Enterprise Platform solves this by restoring visibility inside the Kubernetes cluster. Powered by eBPF, both Hubble and Tetragon deliver deep, context-rich network observability.
Hubble: Network Flow Visibility with Kubernetes Context
Hubble provides detailed visibility into all network flows within the cluster — from DNS lookups and HTTP requests to TCP connections and ICMP traffic. Each flow is enriched with Kubernetes metadata, such as the originating pod, namespace, and any network policies applied.
This enables teams to:
- Understand why traffic is being dropped or delayed
- Visualize communication patterns across clusters and tenants
- Monitor network health in real time, including latency and error rates
The data is available through the intuitive Hubble UI, offering flow logs, service maps, and live observability without the operational complexity of sidecars or packet sniffers.
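Hubble ships with Cilium and only needs to be switched on; a minimal Helm values sketch:

```yaml
# values.yaml (sketch) — enable Hubble flow visibility and the UI.
hubble:
  enabled: true
  relay:
    enabled: true   # aggregates per-node flow data cluster-wide
  ui:
    enabled: true   # service maps and flow logs in the browser
```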

Tetragon: Observability at the Application Layer
Tetragon extends visibility further by observing application-level activity. It records process executions, socket activity, and security-relevant events, providing additional context for troubleshooting and forensics.
For example, Tetragon can show:
- Which container initiated an outbound connection
- What binary and command-line arguments were used
- Where DNS failures or TCP resets are occurring across workloads
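Tetragon is driven by TracingPolicy resources; here is a minimal sketch adapted from the project's quick-start, hooking the kernel's tcp_connect function so each outbound connection is attributed to its workload:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: observe-tcp-connect
spec:
  kprobes:
  - call: tcp_connect   # kernel function for outbound TCP connections
    syscall: false
    args:
    - index: 0
      type: sock        # log the socket, i.e. the connection tuple
```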
Together, Hubble and Tetragon provide a unified view across the network and application layers — helping platform and operations teams maintain visibility, security, and control across both the Kubernetes environment and the ACI fabric it runs on.
For forensics purposes, data collected by Tetragon and Hubble can then be exported to SIEM, log-aggregation, and other monitoring platforms.

For example, you can visualize DNS queries and HTTP golden signals in Grafana dashboards (see the screenshots below), or integrate with Cisco’s own ecosystem via Splunk.
Even Better with Splunk
Security and observability teams need deep, real-time context to diagnose issues and respond to threats quickly. By combining Tetragon’s eBPF-powered telemetry with Splunk’s analytics engine, teams gain full visibility into what’s happening inside their Kubernetes clusters — right down to the process level.
Tetragon captures rich metadata for every event, including:
- The node, pod, and namespace where traffic originated
- The container, process, binary, and arguments involved
- The full five-tuple (source/destination IPs and ports, and protocol) of the traffic flow
This level of detail is invaluable — not only for application performance troubleshooting, but also for forensic investigations. For example, when analyzing an Apache Tomcat CVE exploit attempt, the joint solution makes it easy to reconstruct the full kill chain: which pod initiated the connection, which binary was executed, and what command-line arguments were passed.

For more detail, check out Jeremy’s excellent blog post and subsequent webinar which walk through real-world attack scenarios visualized in Splunk using Tetragon data.
24/7 Enterprise Support
Cilium’s open source ecosystem is vibrant and diverse, with close to 1,000 contributors and over 20,000 GitHub stars and Slack members. But for organizations running production workloads, relying solely on best-effort community support isn’t always enough. With Isovalent, you get enterprise-grade 24/7 support, backed by the engineers who build and maintain Cilium. Among the benefits Isovalent customers get from our support is a mirrored environment configuration that mimics our customers’ production set-up. This means bugs can be fixed faster, and code fixes and patches can be tested against a mirror of the actual customer environment prior to production roll-out.
Isovalent customers have access to a dedicated solutions architect — a trusted advisor who works closely with your team to ensure successful adoption and a faster route to production. This includes tailored technical training, onboarding support, architecture reviews, and guidance on best practices. Our staff of talented architects includes authors of O’Reilly books, such as “Networking and Kubernetes” and the upcoming “Cilium Up and Running”, as well as CCIE-certified engineers with deep experience across both Cisco infrastructure and Kubernetes platforms — bringing a rare blend of cloud-native and enterprise networking expertise to your team.
If you’d like to learn more about what our customers are achieving by partnering with Isovalent, read the case studies below:
Final Recap: Why Cisco ACI + Isovalent Enterprise Platform?
| Capability | Benefit |
|---|---|
| Seamless Cluster Connectivity | Native routing or overlay modes to connect Kubernetes to the ACI fabric |
| Built-in BGP & IPAM | Advertise pod and service IPs to the fabric with minimal config |
| Floating L3Out + Egress Gateway | Resilient, predictable external connectivity with IP identity preservation |
| Enterprise-Grade Security | Multi-tenant aware policy, L7 firewalling, DNS security, and encryption |
| Unified Observability | Full-stack insights via Hubble and Tetragon, exportable to Splunk |
| 24/7 Support & Expert Guidance | SLA-backed support + dedicated Solutions Architect with Cisco/K8s expertise |
Contact Us!
We hope this blog gave you a clear picture of how Cisco ACI and the Isovalent Enterprise Platform work better together — combining powerful fabric networking with deep Kubernetes visibility, control, and performance.
Want to see it in action?
- Get in touch to book a tailored demo with our team
- Watch the 3-part webinar series
- Explore one of our free online labs

Appendix
ACI Terminology Reference
If you’re new to Cisco ACI, the table below maps key ACI constructs to more familiar concepts from traditional networking. This can be helpful when interpreting how Kubernetes clusters integrate with ACI.
| ACI Construct | Traditional Networking Construct | Description |
|---|---|---|
| Tenant | Business Unit | Highest-level object in ACI. Encapsulates VRFs, BDs, EPGs, Contracts, etc. |
| Virtual Routing and Forwarding (VRF) | Virtual Routing and Forwarding (VRF) | Identical concept in ACI and traditional networking – enables multiple isolated Layer 3 routing domains over a shared physical network. |
| Bridge Domain (BD) | VLAN (plus optional SVI) | Layer 2 broadcast domain in ACI. Can include a gateway for attached subnets (SVI). |
| Endpoint Group (EPG) | VLAN (logical group) | Logical grouping of endpoints for policy purposes. |
| Endpoint Security Group (ESG) | Dynamic Security Groups (no real traditional equivalent) | Identity-based grouping of endpoints decoupled from network topology. |
| Layer 3 Out (L3Out) | Router External Interface + Routing Configuration (BGP/OSPF/Static) | Connects ACI to external Layer 3 networks. Includes routing protocols, interface configuration, and policies for outside connectivity. |
| Floating ACI L3Out | N/A | ACI-specific feature. Allows L3Outs to “float” across multiple leaf switches without binding to specific ports, enabling fast failover and load balancing. |
| Contracts | ACLs + Stateful Firewall Rules | ACI policy framework for controlling traffic between EPGs or ESGs. |
ACI L3Out
Cisco ACI L3Out is a standard ACI component for connecting the ACI fabric to external networks, such as routers connecting to the wide area network, firewalls for security, load balancers for providing highly available services housed on virtual machines or bare metal servers, and in this case, connecting to the Kubernetes cluster with Cilium. The L3Out can be enabled with dynamic routing protocols such as BGP or with static routes. In this topology, we use BGP to provide routing and ingress capabilities for the modern workloads and services running within the Kubernetes cluster with Cilium.
There are two default gateway options available within ACI:
- Standard SVIs – the initial model, which required the SVI to be configured on all leaf switches when connecting to an external router
- Floating SVIs – newer model, enabling users to configure L3Out without specifying any L3Out interface on the local leaf.
BGP Support
Both ACI and the Isovalent Enterprise Platform support a wealth of BGP features:
| Feature | ACI | Isovalent |
|---|---|---|
| Custom Timers | x | x |
| Graceful Restart | x | x |
| MD5 Authentication | x | x |
| Route Summarization | x | x |
| Route-Map / Prefix-Filtering | x | |
| eBGP Multihop | x | x |
| Community | x | x |
| IPv6 | x | x |
| Allow As In | x | |
| BFD | x | x |
| BFD (C-bit support) | x | |

Prior to joining Isovalent, Nico worked in many different roles—operations and support, design and architecture, and technical pre-sales—at companies such as HashiCorp, VMware, and Cisco.
In his current role, Nico focuses primarily on creating content to make networking a more approachable field and regularly speaks at events like KubeCon, VMworld, and Cisco Live.

Garry Richardson is the EMEA Head of Solutions Engineering at Isovalent, the originators of the Cilium project. In this role, he supports and enables customers and partners to be successful with Cilium in their mission-critical environments. Garry has over a decade of experience in traditional networking and security, software-defined networking, and network automation, and he is passionate about helping customers adopt Cilium at scale based on business requirements and future needs. When not working, Garry enjoys mountain biking, playing snooker and socialising with friends.