About the speaker: Nico Vibert

Nico Vibert is Senior Technical Marketing Engineer at Isovalent – the company behind the open-source cloud native solution Cilium. Nico has worked in many different roles – operations and support, design and architecture, technical pre-sales – at companies such as HashiCorp, VMware and Cisco. Nico's focus is primarily on network, cloud and automation, and he loves creating content and writing books. Nico regularly speaks at events, whether at large-scale conferences such as VMworld and Cisco Live, at smaller forums such as VMware and AWS User Groups, or at virtual events such as HashiCorp HashiTalks. Outside of Isovalent, Nico is passionate about intentional diversity & inclusion initiatives and is Chief DEI Officer at the Open Technology organization OpenUK. You can find out more about him on his blog.

Pod Traffic Rate Limiting with Cilium Bandwidth Manager

[05:15] In this short video, Senior Technical Marketing Engineer Nico Vibert walks you through how to use Cilium Bandwidth Manager to rate-limit the traffic sent by your Kubernetes Pods. Great for addressing potential contention issues!

Transcript

Welcome to this Isovalent and Cilium Tech Talk series on pod rate limiting. In this session, we're going to use Cilium features to enforce limits on how much traffic a pod can send. We're going to use a GKE cluster. First, we need to get the CIDR from the cluster, and then we'll be using Helm to install Cilium. We could also use the cilium install CLI, but I'm using Helm this time. And note that we need a specific flag to enable the Bandwidth Manager because, by default, it's disabled. We're using 1.11.6; the feature came in with 1.9. It just takes a few seconds to install, and we're good to go.
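For reference, the install looks roughly like this. It is a sketch based on the Cilium 1.11 GKE and Bandwidth Manager documentation: the cluster name and zone variables are placeholders, and Helm value names can differ between releases (for example, newer charts use bandwidthManager.enabled=true rather than the boolean used here).

# Retrieve the native routing CIDR from the GKE cluster (cluster name and zone are placeholders)
NATIVE_CIDR="$(gcloud container clusters describe "$CLUSTER_NAME" --zone "$ZONE" --format 'value(clusterIpv4Cidr)')"

# Install Cilium 1.11.6 with the Bandwidth Manager enabled (it is disabled by default)
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.11.6 \
  --namespace kube-system \
  --set gke.enabled=true \
  --set ipam.mode=kubernetes \
  --set ipv4NativeRoutingCIDR="$NATIVE_CIDR" \
  --set bandwidthManager=true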

Now, we're going to make sure that Cilium is healthy. We're going to restart the DaemonSet, and we're going to check that the Cilium agents are running fine. One isn't quite ready, so let's run this again, and we're good to go. We can also check that the feature has been enabled correctly, and we can see that it is now.
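These checks look roughly like this; a sketch assuming Cilium runs as the cilium DaemonSet in the kube-system namespace, as in a standard install.

# Restart the Cilium DaemonSet so the agents pick up the new configuration
kubectl -n kube-system rollout restart daemonset/cilium
kubectl -n kube-system get pods -l k8s-app=cilium

# Confirm the Bandwidth Manager is reported as enabled by the agent
kubectl -n kube-system exec ds/cilium -- cilium status | grep BandwidthManager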

What we're going to do now is check that the feature works, and we're going to do some network performance testing between two pods. The pods are going to be placed on different nodes, and there's going to be a server and a client. Now, you can see here we're using an annotation to specify the maximum bandwidth the pod is allowed to transmit at, which is going to be 10 megabits per second. And on the client, we're using pod anti-affinity to make sure that the client and the server are scheduled on different nodes.
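The manifest is along the lines of the example in the Cilium Bandwidth Manager documentation; treat it as a sketch (the pod names, label and cilium/netperf image come from that example). The kubernetes.io/egress-bandwidth annotation sets the transmit limit, and the pod anti-affinity rule keeps the client and server on different nodes.

apiVersion: v1
kind: Pod
metadata:
  name: netperf-server
  labels:
    app.kubernetes.io/name: netperf-server
  annotations:
    # Limit the traffic this pod is allowed to transmit to 10 Mbit/s
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: netperf
    image: cilium/netperf
    ports:
    - containerPort: 12865
---
apiVersion: v1
kind: Pod
metadata:
  name: netperf-client
spec:
  affinity:
    # Keep the client off the node where the server is scheduled
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/name
            operator: In
            values:
            - netperf-server
        topologyKey: kubernetes.io/hostname
  containers:
  - name: netperf
    image: cilium/netperf
    args:
    - sleep
    - infinity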

So, what we're going to do next is deploy this manifest, check that the pods are ready to go, and then we'll need the IP address of the server so that we can start the network performance testing. Here we go. The documentation can be found on cilium.io, and you can see we're running this performance test, and the result is 9.53, which is just a shade under 10 megabits per second, which is great.
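The test itself looks roughly like this; a sketch assuming the pod names from the manifest above, a placeholder filename, and the netperf TCP_MAERTS test used in the Cilium documentation, which measures how fast the server can transmit back to the client.

# Deploy the two pods and check they are running on different nodes
kubectl apply -f netperf.yaml
kubectl get pods -o wide

# Grab the server pod IP and run the test from the client
NETPERF_SERVER_IP=$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')
kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "$NETPERF_SERVER_IP"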

We can also make sure that rate limiting has been enforced at the BPF layer by running a few commands from the Cilium pod that is co-located with the netperf server pod. You can see that the 10 megabits per second limit has been enforced for identity 3825, which is our netperf server. Now, let's validate this by changing the bandwidth from 10 to 100. We can reapply the manifest, re-run the test, and make sure that the bandwidth has increased accordingly. And 95.40, so again, just a tiny bit under 100 megabits per second. And again, we can also check that this has been enforced using the Cilium BPF commands.
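Those steps look roughly like this; a sketch in which cilium-xxxxx is a placeholder for the name of the Cilium agent pod running on the netperf server's node.

# List the egress bandwidth limits programmed into BPF on that node
kubectl -n kube-system exec cilium-xxxxx -- cilium bpf bandwidth list

# Cross-check which identity belongs to the netperf-server pod
kubectl -n kube-system exec cilium-xxxxx -- cilium endpoint list

# Change the annotation in the manifest to kubernetes.io/egress-bandwidth: "100M",
# then reapply it and re-run the same netperf test
kubectl apply -f netperf.yaml
kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "$NETPERF_SERVER_IP"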

So, that's it. Thanks for watching this brief introduction to Cilium rate limiting.