About the speaker

Nico Vibert
Senior Staff Technical Marketing Engineer

Pod Traffic Rate Limiting with Cilium Bandwidth Manager

[05:15] In this short video, Senior Technical Marketing Engineer Nico Vibert walks you through how to use Cilium Bandwidth Manager to rate-limit the traffic sent by your Kubernetes Pods. Great to address potential contention issues!

Transcript

Welcome to this Isovalent and Cilium Tech Talk series on pod rate limiting. In this session, we’re going to use Cilium features to enforce limits on how much traffic a pod can send. We’re going to use a GKE cluster. First, we need to get the CIDR from the cluster, and then we’ll be using Helm to install Cilium. Now, we could also use cilium install, but I’m using Helm this time. And note that we need a specific flag to enable the Bandwidth Manager because, by default, it’s disabled. We’re using 1.11.6; the feature came in with 1.9. It just takes a few seconds to install, and we’re good to go.
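For reference, that install step looks roughly like the sketch below. The cluster name and zone are placeholders, only the key Helm values are shown (see the GKE installation guide on docs.cilium.io for the full set), and newer chart versions spell the flag bandwidthManager.enabled=true rather than bandwidthManager=true:

```bash
# Grab the pod CIDR from the GKE cluster (cluster name and zone are placeholders).
NATIVE_CIDR="$(gcloud container clusters describe my-cluster --zone europe-west1-b \
  --format 'value(clusterIpv4Cidr)')"

# Install Cilium 1.11.6 with the Bandwidth Manager enabled (it is off by default).
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.11.6 \
  --namespace kube-system \
  --set gke.enabled=true \
  --set ipam.mode=kubernetes \
  --set nativeRoutingCIDR="${NATIVE_CIDR}" \
  --set bandwidthManager=true
```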

Now, we’re going to make sure that Cilium is healthy. We’re going to restart the DaemonSet, and we’re going to check that the Cilium agents are running fine. One isn’t quite ready, so let’s run this again, and we’re good to go. We can also check that the feature has been enabled correctly, and we can see that it is now.
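These health and feature checks can be reproduced with commands along these lines (a sketch; when the feature is active, cilium status reports a BandwidthManager line):

```bash
# Restart the Cilium DaemonSet so the agents pick up the new configuration,
# then wait for the rollout to complete.
kubectl -n kube-system rollout restart ds/cilium
kubectl -n kube-system rollout status ds/cilium

# Check that all Cilium agents are Ready.
kubectl -n kube-system get pods -l k8s-app=cilium

# Confirm the Bandwidth Manager is enabled on an agent.
kubectl -n kube-system exec ds/cilium -- cilium status | grep BandwidthManager
```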

What we’re going to do now is check that the feature works, and we’re going to do some network performance testing between two pods. The pods are going to be placed on different nodes, and there’s going to be a server and a client. Now, you can see here we’re using annotations to specify the maximum bandwidth the pod is allowed to transmit at, which is going to be 10 megabits per second. And we’re also using anti-affinity on the client to make sure that the client and the server are placed on different nodes.
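A minimal sketch of such a manifest, modelled on the Bandwidth Manager example in the Cilium documentation, is shown below: the kubernetes.io/egress-bandwidth annotation caps the server’s egress at 10 megabits per second, and pod anti-affinity on the client keeps the two pods on different nodes. The pod names, the netperf image, and the container commands are assumptions for illustration:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: netperf-server
  labels:
    app: netperf
  annotations:
    # Cap egress traffic from this pod at 10 Mbit/s.
    kubernetes.io/egress-bandwidth: "10M"
spec:
  containers:
  - name: netperf
    image: cilium/netperf
    command: ["netserver", "-D"]
---
apiVersion: v1
kind: Pod
metadata:
  name: netperf-client
spec:
  affinity:
    # Keep the client off the node that runs the netperf server.
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - netperf
        topologyKey: kubernetes.io/hostname
  containers:
  - name: netperf
    image: cilium/netperf
    command: ["sleep", "infinity"]
EOF
```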

So, what we’re going to do next is deploy this manifest, check that the pods are ready to go, and then we’ll need the IP address of the server so that we can start the network performance testing. Here we go. The documentation can be found on cilium.io, and you can see we’re running this performance test: the result comes out at 9.53, just a shade under 10 megabits per second, which is great.
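Those steps map to commands roughly like the following (a sketch; the documentation example uses netperf’s TCP_MAERTS test so that the measured traffic flows from the annotated server toward the client):

```bash
# Wait until both pods are Ready and note which nodes they landed on.
kubectl get pods -o wide

# Fetch the server pod's IP and run the throughput test from the client.
NETPERF_SERVER_IP="$(kubectl get pod netperf-server -o jsonpath='{.status.podIP}')"
kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
# With the 10M annotation in place, the reported throughput lands just under
# 10 Mbit/s (9.53 in the video).
```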

We can also make sure that the rate limiting has been enforced at the BPF layer by running a few commands from the Cilium pod, which is co-located with the netperf server pod. You can see the 10 megabits per second limit has been enforced for identity 3825, which is our netperf server. Now, let’s validate this by changing the bandwidth from 10 to 100: we can reapply the manifest, re-run the test, and make sure that the bandwidth has been increased accordingly. And 95.40, so again, just a tiny bit under 100 megabits per second. And again, we can also check that this has been enforced using the Cilium BPF commands.
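The BPF-level check and the follow-up 100-megabit test look roughly like this sketch, where cilium bpf bandwidth list prints the enforced egress limits and cilium endpoint list helps map the listed identity back to the netperf server pod (picking the agent pod co-located with the server is left to the reader; this example simply grabs one by label):

```bash
# Exec into a Cilium agent pod (ideally the one on the node hosting netperf-server).
CILIUM_POD="$(kubectl -n kube-system get pods -l k8s-app=cilium \
  -o jsonpath='{.items[0].metadata.name}')"

# Show the enforced egress limits, and map identities/endpoints back to pods.
kubectl -n kube-system exec "${CILIUM_POD}" -- cilium bpf bandwidth list
kubectl -n kube-system exec "${CILIUM_POD}" -- cilium endpoint list

# Raise the limit by changing the annotation to "100M", reapply the manifest,
# re-fetch the server IP if the pod was recreated, and re-run the test.
kubectl exec netperf-client -- netperf -t TCP_MAERTS -H "${NETPERF_SERVER_IP}"
```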

So, that’s it. Thanks for watching this brief introduction to Cilium rate limiting.