WireGuard Node-To-Node Encryption on Cilium
[10:44] In this video, learn about a new feature: Cilium Transparent Encryption with WireGuard can now encrypt traffic node-to-node!

[06:39] In this video, Senior Technical Marketing Engineer Nico Vibert walks you through how Cilium Gateway API can route HTTPS traffic into your cluster.
Learn how VSHN, a provider of services for mission-critical applications, reduced their support burden with Isovalent's Enterprise Edition of Cilium.
[05:22] In this short video, Senior Technical Marketing Engineer Nico Vibert walks you through how to use Cilium Gateway API to modify HTTP headers.
[11:37] With Cilium 1.13 comes a new exciting feature that enables faster performance and lower latency through the network stack: BIG TCP.
Announcing Cilium 1.13 - Gateway API, mTLS datapath, Service Mesh, BIG TCP, SBOM, SNI NetworkPolicy - and many more features!
With Cilium, you can now leverage BIG TCP with IPv4 or IPv6 to improve performance through the Linux network stack.
Learn how simple it is to install and operate IPv6 with Cilium and Hubble. With Kubernetes’ IPv6 support improving in recent releases and Dual Stack Generally Available in Kubernetes 1.23, it’s time to learn about IPv6 on Kubernetes. You might be wondering “How on Earth am I going to be able to operate this?” Good news – you’re in the right place. This lab will walk you through how to deploy an IPv4/IPv6 Dual Stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities. In particular, visibility of IPv6 flows is absolutely essential: IPv6’s slow adoption is primarily caused by fears it would be hard to operate and manage. As you will see, a tool such as Hubble will help operators visualize and understand their IPv6 network better.
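For readers who want a head start on the lab, a dual-stack cluster can be created with kind before installing Cilium. A minimal sketch (the cluster name is illustrative):

```yaml
# kind configuration for an IPv4/IPv6 dual-stack cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dual-stack-demo       # illustrative name
networking:
  ipFamily: dual            # enable IPv4/IPv6 dual-stack
  disableDefaultCNI: true   # skip kindnet so Cilium can be installed as the CNI
```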
[03:23] In this brief demo, we introduce a new Cilium 1.13 feature: support for Kubernetes Gateway API!
[02:26] In this brief demo, we test a new tool called Ingress2Gateway that lets you convert Kubernetes Ingress resources to Gateway API resources.
Kubernetes does not natively support gRPC Load Balancing out of the box. Learn how to use Cilium’s embedded Envoy proxy to achieve load-balancing for L7 services, with a simple annotation.
[01:01] In Cilium 1.13, you can now use Cilium’s embedded Envoy proxy to achieve load-balancing for L7 services, with a simple annotation.
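As a sketch of what that annotation looks like in practice (the Service name and port are illustrative; check the Cilium docs for your version), marking a Service with `service.cilium.io/lb-l7: enabled` asks Cilium's embedded Envoy to balance it at L7, which fixes the request imbalance caused by gRPC's long-lived connections:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend                  # illustrative name
  annotations:
    service.cilium.io/lb-l7: enabled  # balance per-request via Cilium's Envoy
spec:
  selector:
    app: grpc-backend
  ports:
    - port: 50051                     # typical gRPC port, illustrative
      targetPort: 50051
```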
Creating the right Network Policies can be difficult. In this lab, you will use Hubble metrics to build a Network Policy Verdict dashboard in Grafana showing which flows need to be allowed in your policy approach.
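As a hedged example of the Helm values such a dashboard builds on (the label contexts are illustrative), enabling the `policy` metric makes Hubble export policy verdict counts that Grafana can chart:

```yaml
hubble:
  metrics:
    enabled:
      # export policy verdict metrics, labelled by source and destination
      - policy:sourceContext=app|namespace;destinationContext=app|namespace
```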
[01:13] In Cilium 1.13, Ingress resources can now share Kubernetes LoadBalancer resources. Watch the mini demo to learn more!
[01:30] Cilium 1.13 comes with a fully integrated HTTP traffic splitting engine!
[01:47] In this mini-demo, you will learn about internalTrafficPolicy support on Cilium! This feature was added with Cilium 1.13.
[01:41] In this mini-demo, you will get an insight into Load-Balancer IP Address Management support on Cilium! This feature was added with Cilium 1.13.
[01:21] In this mini-demo, you will get an insight into SCTP support on Cilium! This feature was added with Cilium 1.13.
[01:09] What’s new in Cilium 1.13 is the ability to use Cilium to advertise not just the Pod IP range but also Kubernetes Service IPs.
SCTP (Stream Control Transmission Protocol) is a transport-layer protocol used for communication between applications. It is similar to TCP, but it provides additional features such as multi-homing and message fragmentation. Applications that require reliable, ordered delivery of data, but also need the ability to handle multiple streams of data simultaneously, can use SCTP. SCTP is primarily used by service providers and mobile operators. While SCTP support for Kubernetes Services, Endpoints and NetworkPolicy was introduced in Kubernetes 1.12, you still need a CNI that supports it. Good news: basic support for SCTP was introduced in Cilium 1.13!
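For illustration, exposing an SCTP application through a Kubernetes Service looks just like TCP apart from the protocol field (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-echo        # illustrative name
spec:
  selector:
    app: sctp-echo
  ports:
    - protocol: SCTP     # requires a CNI with SCTP support, e.g. Cilium 1.13+
      port: 9999
      targetPort: 9999
```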
BGP support was initially introduced in Cilium 1.10 and subsequent improvements have been made since, such as the recent introduction of IPv6 support in Cilium 1.12. In Cilium 1.13, that support was enhanced with the introduction of Load Balancer IPAM and BGP Service address advertisements. In this lab, you will learn about both these new features and how they can simplify your network connectivity operations.
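As a sketch of the first feature (the CIDR is illustrative, and the CRD shape may differ across Cilium versions), a Load Balancer IP pool is declared like this; Cilium then assigns addresses from the pool to Services of type LoadBalancer, and BGP can advertise them:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: demo-pool         # illustrative name
spec:
  cidrs:
    - cidr: 10.0.10.0/24  # addresses handed out to LoadBalancer Services
```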
Ever wonder how to install a specific version of Cilium? Or whether to use Helm or the cilium-cli? Let's look at the many ways to install Cilium.
We now have badges for Isovalent certified Cilium hands-on labs. Collect all four of them over the holidays.
In this 3-part webinar series, Isovalent developers tell the story of how and why eBPF was created, how eBPF works and how Cilium was born.
[39:52] The final part of the How the Hive Came to Bee series is presented by Joe Stringer (Cilium maintainer).
[60:56] Join us for the second session of our eBPF Creators webinar series to learn how eBPF works at the kernel level. You will see how eBPF functions under the hood, discuss its internal workings, and learn “how things are actually done” with eBPF.
[52:11] Tune in to the first session of our eBPF Creators' webinar series to hear how eBPF was started, and what challenges can now be solved with eBPF that were impossible before. In this session, you will learn about the impact of eBPF and how it is fundamentally changing networking, tracing, and security.
Capital One needed to scale their PaaS to multiple teams - but required secure network isolation, visibility and minimal performance overhead. Isovalent Cilium Enterprise met all requirements and scaled past the iptables limits. Hubble’s additional observability capabilities helped their teams to do more from the start.
Isovalent helped PostFinance to build a scalable Kubernetes platform to run mission-critical banking software in production. By migrating to Cilium as the default CNI for Kubernetes, they were able to solve their challenges regarding scale, observability and latency. The network was made visible, improving troubleshooting, enabling forensic analysis and transparently encrypting network traffic.
Microsoft and Isovalent enter a strategic partnership to bring eBPF-based Cilium and Tetragon to Azure and AKS.
[05:40] In this demo, Isovalent Staff Software Engineer Louis DeLosSantos walks through an introductory demo of SRv6 on Cilium, for a L3VPN use case. The demo was first shown live during eBPF Day North America 2022.
Ever since its inception, Cilium has supported Kubernetes Network Policies to enforce traffic control to and from pods at L3/L4. But Cilium Network Policies go even further: by leveraging eBPF, they provide greater visibility into packets and enforce traffic policies at L7, filtering traffic based on criteria such as FQDN or protocol (such as Kafka or gRPC). Creating and manipulating these Network Policies is done declaratively using YAML manifests. What if we could apply the Kubernetes Network Policy operating model to our hosts? Wouldn’t it be nice to have a consistent security model across not just our pods, but also the hosts running the pods? Let’s look at how the Cilium Host Firewall can achieve this. In this lab, we will install SSH on the nodes of a Kind cluster, then create Cluster-wide Network Policies to regulate how the nodes can be accessed using SSH. The Control Plane node will be used as a bastion to access the other nodes in the cluster.
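A minimal sketch of the kind of policy the lab builds (entity and port choices are illustrative, and a real setup also needs rules for kubelet and other control-plane traffic before enforcing): a clusterwide policy selecting all nodes that only allows SSH from within the cluster:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-ssh           # illustrative name
spec:
  nodeSelector: {}            # apply to all nodes (host endpoints)
  ingress:
    - fromEntities:
        - cluster             # only peers inside the cluster...
      toPorts:
        - ports:
            - port: "22"      # ...may reach SSH on the hosts
              protocol: TCP
```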
In this short lab, you will learn about Gateway API, a new Kubernetes standard on how to route traffic into a Kubernetes cluster. The Gateway API is the next generation of the Ingress API. Gateway API addresses some of the Ingress API's limitations by providing an extensible, role-based and generic model to configure advanced L7 traffic routing capabilities into a Kubernetes cluster. In this lab, you will learn how you can use the Cilium Gateway API functionality to route HTTP and HTTPS traffic into your Kubernetes-hosted application.
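A minimal sketch of the two resources involved (names, paths and ports are illustrative): a Gateway owned by the Cilium GatewayClass, and an HTTPRoute attached to it that routes traffic to a backend Service:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway            # illustrative name
spec:
  gatewayClassName: cilium     # handled by Cilium's Gateway API support
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web-route              # illustrative name
spec:
  parentRefs:
    - name: web-gateway        # attach to the Gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: web-backend    # illustrative backend Service
          port: 8080
```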
In this tutorial, you'll learn how easy it is to encrypt Kubernetes traffic using Cilium Transparent Encryption with IPsec and WireGuard.
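As a sketch of how little configuration is involved (for illustration; consult the Cilium docs for version-specific flags), these Helm values turn on transparent encryption:

```yaml
encryption:
  enabled: true
  type: wireguard   # or "ipsec"; IPsec additionally needs a pre-shared key Secret
```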
eBPF is the new standard for programming Linux kernel capabilities in a safe and efficient manner, without changing kernel source code or loading kernel modules. It has enabled a new generation of high performance tooling to be developed covering networking, security, and observability use cases. The best way to learn about eBPF is to read the book “What is eBPF” by Liz Rice. And the best way to have your first experience with eBPF programming is to walk through this lab, which takes the opensnoop example out of the book and teaches you to handle an eBPF tool, watch it load its components, and even add your own tracing into the source eBPF code.
You already know that Cilium accelerates networking, and provides security and observability in Kubernetes, using the power of eBPF. Now Cilium is bringing those eBPF strengths to the world of Service Mesh. Cilium Service Mesh features eBPF-powered connectivity, traffic management, security and observability. In this lab, you will learn how you can use Cilium to deploy Ingress resources to dynamically configure the Envoy proxy provided with the Cilium agent. And all of the above without any Envoy sidecar injection into your pods!
Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration. In this track, we provide you a fully fledged Cilium installation on a small cluster, together with a few challenges to solve. See for yourself how Cilium works, and how it can help you secure your moon-sized battlestation in a “Star Wars”-inspired challenge.
In this scenario, we are going to show how Isovalent Enterprise for Cilium can provide visibility into TLS traffic. In Security Audits, a company or team has to verify their application protects data in transit and doesn’t leak information during communication, especially when data leaves a sensitive internal network. Mechanisms like TLS ensure that data is encrypted in transit, but verifying that a TLS configuration is secure becomes a challenge for most companies. In this lab, you will learn how Isovalent Enterprise for Cilium can: 1) identify the version of TLS being used, informing us if an obsolete and insecure version is in use, 2) report on the cipher being used, and 3) export events in JSON format to a SIEM.
Kubernetes changes the way we think about networking. In an ideal Kubernetes world, the network would be entirely flat and all routing and security between the applications would be controlled by the Pod network, using Network Policies. In many Enterprise environments, though, the applications hosted on Kubernetes need to communicate with workloads living outside the Kubernetes cluster, which are subject to connectivity constraints and security enforcement. Because of the nature of these networks, traditional firewalling usually relies on static IP addresses (or at least IP ranges). This can make it difficult to integrate a Kubernetes cluster, which has a varying —and at times dynamic— number of nodes into such a network. Cilium’s Egress Gateway feature changes this, by allowing you to specify which nodes should be used by a pod in order to reach the outside world.
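As a hedged sketch of the feature (labels, CIDRs and the egress IP are all illustrative), an egress gateway policy pins outbound traffic from selected pods to a chosen node and source IP, giving external firewalls a static address to allow:

```yaml
apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: egress-legacy-db       # illustrative name
spec:
  selectors:
    - podSelector:
        matchLabels:
          app: billing         # pods whose egress traffic is steered
  destinationCIDRs:
    - 192.168.100.0/24         # the external network behind the firewall
  egressGateway:
    nodeSelector:
      matchLabels:
        egress-gateway: "true" # node acting as the egress gateway
    egressIP: 10.0.0.100       # static source IP the firewall will see
```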
With the rise of Kubernetes adoption, an increasing number of clusters are being deployed for various needs, and it is becoming common for companies to have clusters running on multiple cloud providers, as well as on-premise. Kubernetes Federation has for a few years brought the promise of connecting these clusters into multi-zone layers, but latency issues are more often than not preventing such architectures. Cilium Cluster Mesh allows you to connect the networks of multiple clusters in such a way that pods in each cluster can discover and access services in all other clusters of the mesh, provided all the clusters run Cilium as their CNI. This effectively joins multiple clusters into a large unified network, regardless of the Kubernetes distribution each of them is running. In this lab, we will see how to set up Cilium Cluster Mesh, and the benefits of such an architecture.
Learn how to connect your Kubernetes Clusters with your on-premises network using BGP. As Kubernetes becomes more pervasive in on-premise environments, users increasingly have both traditional applications and Cloud Native applications in their environments. In order to connect them together and allow outside access, a mechanism to integrate Kubernetes and the existing network infrastructure running BGP is needed. Cilium offers native support for BGP, exposing Kubernetes to the outside and all the while simplifying users’ deployments.
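For illustration (ASNs, addresses and labels are hypothetical), Cilium's BGP support is configured declaratively; a policy like the one below makes labelled nodes peer with a top-of-rack router and advertise their Pod CIDR:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering                   # illustrative name
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled                    # only labelled nodes run BGP
  virtualRouters:
    - localASN: 64512
      exportPodCIDR: true             # advertise this node's Pod CIDR
      neighbors:
        - peerAddress: "172.16.0.1/32"  # the on-premises router
          peerASN: 64513
```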
In this tutorial, you will learn how to use Azure CNI Powered by Cilium, and get an overview of the various AKS networking options.
Achieving zero-trust network connectivity via Kubernetes Network Policy is complex as modern applications have many service dependencies (downstream APIs, databases, authentication services, etc.). With the “default deny” model, a missed dependency leads to a broken application. Moreover, the YAML syntax of Network Policy is often difficult for newcomers to understand. This makes writing policies and understanding their expected behavior (once deployed) challenging. Enter Isovalent Enterprise for Cilium: it provides tooling to simplify and automate the creation of Network Policy based on labels and DNS-aware data from Cilium Hubble. APIs enable integration into CI/CD workflows while visualizations help teams understand the expected behavior of a given policy. Collectively, these capabilities dramatically reduce the barrier to entry to creating Network Policies and the ongoing overhead of maintaining them as applications evolve. In this hands-on demo we will walk through some of those challenges and their solutions.
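As an illustration of what such generated policies look like (labels and the FQDN are hypothetical), a DNS-aware Cilium policy first allows DNS lookups, then permits egress only to a named API:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-egress              # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  egress:
    # allow DNS so Cilium can observe lookups and resolve the FQDN rule below
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: UDP
          rules:
            dns:
              - matchPattern: "*"
    # allow egress only to this downstream API (hypothetical FQDN)
    - toFQDNs:
        - matchName: api.example.com
```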
Microsoft selects Isovalent and Cilium to power Networking and Security for Azure Kubernetes Service (AKS).
Grafana Labs announces a partnership with Isovalent to bring Cilium's eBPF-powered observability to Kubernetes and cloud native infrastructure.
Cilium Cluster Mesh: how it provides a single networking, security and observability solution for applications spanning multiple clusters.
Cilium is the first cloud native networking platform to support BBR, an innovative protocol that accelerates network performance.
A tutorial on installing, configuring and observing IPv4/IPv6 Dual Stack with Cilium and Hubble
What do we need to consider when we pick the four golden signals for monitoring Kubernetes environments?
[09:35] In this video, Senior Technical Marketing Engineer Nico Vibert walks through two methods to encrypt data in transit between Kubernetes Pods: Cilium Transparent Encryption with IPsec or WireGuard.
[10:00] In this video, Senior Technical Marketing Engineer Nico Vibert will walk you through how to deploy an IPv4/IPv6 Dual Stack Kubernetes cluster and install Cilium and Hubble to benefit from their networking and observability capabilities.
eBPF-powered Cilium has taken the world of Kubernetes connectivity and security by storm. With their Series B funding, Isovalent will continue to remain the leading force behind the eBPF community and continue the rise of Cilium as the leading technology for Kubernetes networking, security, and service mesh.
Deep Dive on Bandwidth Management with Cilium