About the speaker: Nico Vibert, Senior Staff Technical Marketing Engineer

BGP on Cilium

[14:24] In this video, Senior Staff Technical Marketing Engineer Nico Vibert walks through the BGP enhancements in Cilium 1.12, including the new integration with GoBGP. This version also introduces support for BGP over IPv6.

Transcript

Welcome to this Isovalent and Cilium Tech Flash on BGP on Cilium 1.12. Now, we’ve actually had support for BGP on Cilium for a little while: we’ve been running MetalLB to provide BGP services on Cilium and connect your Cilium-managed pods to your traditional networking environment. You can read more about this in the 1.10 blog post.

So, what it gave us at the time was the ability to advertise the load balancer IPs and the pod CIDRs back to your on-premises environment, and to enable communication without the need for another tool. That worked perfectly. But what we found is that customers were asking for additional features such as IPv6 and segment routing, as well as advanced routing capabilities that were not available in MetalLB or were still experimental.
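For context, the legacy approach was enabled through Helm options (bgp.enabled plus the bgp.announce settings) and driven by a MetalLB-style ConfigMap. The sketch below is an assumption of what that looked like; the ASNs, addresses, and pool name are illustrative and not taken from the 1.10 blog post.

```yaml
# Sketch of the legacy MetalLB-style BGP configuration used before Cilium 1.12.
# ASNs, addresses, and the pool name are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: bgp-config
  namespace: kube-system
data:
  config.yaml: |
    peers:
      - peer-address: 10.0.0.1      # upstream router
        peer-asn: 64512
        my-asn: 64512
    address-pools:
      - name: default
        protocol: bgp
        addresses:
          - 172.16.0.0/24           # LoadBalancer IP range to advertise
```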

So, we’re moving towards a new implementation of BGP, migrating from MetalLB to GoBGP, which gives us the ability to support these additional features. We’ll also see some changes in the CRD used for BGP neighbor relationships and pod CIDR advertisements, and in how you enable BGP on Cilium: we’re using a slightly different configuration and flags. Now, let’s go into a demo and take it from there.

Demo: Kubernetes cluster deployment with Kind

So, to start with, we’re going to build a simulated representation of a traditional data center environment. At the bottom, you’ve got the core network, attached to a couple of top-of-rack switches, and we’re using BGP to connect from the top-of-rack switches to our core routers. What we don’t have yet is a connection from the top-of-rack switches to the Kubernetes pods. What we want to be able to do is advertise the pod CIDRs straight into the core network, which really simplifies connectivity from the traditional data center environment into our Kubernetes environment. It gives us the ability to expose and integrate Kubernetes services with the rest of your network.

Now, again, to start with, we don’t have any BGP, and we’ll be setting up BGP on Cilium as part of the demo. What I’m using for the demo is a platform called Containerlab, which enables us to replicate a virtual networking environment using open source software, and we’ll also be using Kind to create our Kubernetes and Cilium environment.

So, to start with, we’re going to use Kind to create our cluster. Let’s just have a quick look at the cluster configuration. We’ve disabled the default CNI, because we’ll be installing Cilium shortly. We’ve got a few worker nodes and a control plane node, and you can see we also have some labels, rack0, rack0, rack1, and rack1, which, going back to the picture we highlighted a bit earlier, essentially represents where our BGP agents will run.
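As a rough sketch, a Kind configuration along these lines would produce that cluster; the node count and the "rack" label key are assumptions for illustration, not the exact file used in the demo.

```yaml
# Sketch of a Kind cluster config with the default CNI disabled and rack labels.
# Node count and the "rack" label key are illustrative assumptions.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true        # Cilium will be installed as the CNI
nodes:
  - role: control-plane
    labels:
      rack: rack0
  - role: worker
    labels:
      rack: rack0
  - role: worker
    labels:
      rack: rack1
  - role: worker
    labels:
      rack: rack1
```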

Let’s create the cluster. It should take a couple of minutes, but I’ll speed this up as part of the demo. Okay, our Kind cluster is ready.

A brief overview of Containerlab and a Containerlab deployment

So, what we’ll be doing now is deploying our virtual routers, and we’re using Containerlab for that. Containerlab is a platform that creates a virtual networking environment using virtual machines or containers, mostly built on open source software like FRR.

So, if we go through this topology, what we’ll be deploying is essentially a set of virtual Linux-based routers. We’ve got our top-of-rack routers, using FRR, and you can see we’re setting up some BGP, which will be the peering sessions with the Cilium nodes. As part of the configuration, we’re also creating the virtual links between our top-of-rack routers and the Cilium nodes.
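For reference, a Containerlab topology along these lines could describe that setup; the node names, image tag, and link layout below are assumptions for illustration, not the exact files from the demo, and the links towards the Kind nodes are omitted.

```yaml
# Sketch of a Containerlab topology with FRR routers for the core and top-of-rack layers.
# Names, image tag, and links are illustrative assumptions; Kind-node links are omitted.
name: bgp-demo
topology:
  nodes:
    router0:                       # core router
      kind: linux
      image: frrouting/frr:v8.2.2
      binds:
        - router0/frr.conf:/etc/frr/frr.conf
    tor0:                          # top-of-rack router for rack0
      kind: linux
      image: frrouting/frr:v8.2.2
      binds:
        - tor0/frr.conf:/etc/frr/frr.conf
    tor1:                          # top-of-rack router for rack1
      kind: linux
      image: frrouting/frr:v8.2.2
      binds:
        - tor1/frr.conf:/etc/frr/frr.conf
  links:
    - endpoints: ["router0:eth1", "tor0:eth1"]   # core to rack0
    - endpoints: ["router0:eth2", "tor1:eth1"]   # core to rack1
```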

Cilium Installation

Let’s go ahead and deploy this environment. It just takes a handful of seconds, and we’re good to go. Now we can install Cilium, and we’re going to be using Helm for this. If we look at the Helm flags, we’re using the BGP control plane enabled flag, and that’s the only BGP-related setting we need. We’ll be using the new 1.12 version for this. Here we go, we’re good to go.
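In Helm terms, the option in question is the bgpControlPlane.enabled value in Cilium 1.12. A minimal values sketch might look like the following; the IPAM setting is an assumption for a Kind cluster rather than something stated in the video.

```yaml
# Minimal sketch of Helm values enabling the new BGP control plane in Cilium 1.12,
# e.g. helm install cilium cilium/cilium --version 1.12.x -n kube-system -f values.yaml
bgpControlPlane:
  enabled: true          # GoBGP-based BGP control plane (Cilium 1.12+)
ipam:
  mode: kubernetes       # assumption: use per-node PodCIDRs allocated by Kubernetes
```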

Now, by this stage, we’ve got BGP working between our top-of-rack routers and the core network, but there’s no BGP configured on Cilium yet. So, we’re going to have to create some peering policies and apply them with Kubernetes. But if we look at the routing table and the BGP neighbors, we’ll see that it’s not quite ready yet. The way to do that with Containerlab is to access the container, router0. The core network has peering sessions with tor0 and tor1, and if we zoom out, you can see we’re learning some prefixes: three prefixes from each neighbor. If we log on to tor0 and run a show bgp command, we’ll see that we are learning six routes from the core router, router0, but the session with the Cilium agents is still inactive, so it’s not established yet. That’s what we’re going to be bringing up. And if we run the same command on the other top-of-rack router, we’ll see that session isn’t quite up and running yet either.

Cilium BGP Policies Deployment

Now, let’s have a look at a Cilium BGP peering policy. It’s actually pretty simple to set up BGP with Cilium once you’ve enabled the feature. What we’re using is a label-based policy model. You can see we have labels here like rack: rack0, and if you recall, we set that label on the nodes when we created the cluster with Kind, so you can apply the same policy to multiple nodes with a single configuration. And of course, you can specify your traditional BGP configuration: your local autonomous system number, your remote one, and the remote IP address, which is the loopback IP of the remote device.

We also have this exportPodCIDR flag, which advertises all the pod CIDRs assigned to the node. You don’t actually need to know the pod CIDR itself; you just toggle this flag, and all the pod IP address ranges will be advertised. We have the same for rack1: it’s a very similar configuration. And what you can also do with 1.12 is advertise IPv6 CIDRs and connect to a remote IPv6 peer, again using the corresponding parameters in your configuration here. A sketch of such a policy is shown below.
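As a rough sketch, a CiliumBGPPeeringPolicy for the rack0 nodes could look like the following; the ASNs, peer address, and names are illustrative assumptions, and the rack1 policy would be analogous.

```yaml
# Sketch of a CiliumBGPPeeringPolicy for the rack0 nodes (Cilium 1.12, cilium.io/v2alpha1).
# ASNs and the peer address are illustrative assumptions; a rack1 policy is analogous.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: rack0-policy
spec:
  nodeSelector:
    matchLabels:
      rack: rack0                      # matches the node label set in the Kind config
  virtualRouters:
    - localASN: 65010                  # ASN used by the Cilium nodes in rack0
      exportPodCIDR: true              # advertise each node's PodCIDR automatically
      neighbors:
        - peerAddress: "10.0.0.1/32"   # loopback of tor0; an IPv6 address works here too
          peerASN: 65010
```

Once applied with kubectl, the Cilium agents on the matching nodes pick up the policy, create the virtual router, and start peering with the configured neighbor.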

Now, let’s go ahead and deploy the policy. Then, let’s log back into our routers and see if the BGP session has come up, and it has.

We can now see that after about 20 seconds, we’ve got a BGP session up, and we are receiving one prefix, which is the local pod CIDR network of each node. If we run the same command on top-of-rack one, we should also see an active peering session: we are receiving prefixes and, of course, sending prefixes of our own, so the BGP session has been established.

And if we log back to the core network, you can see that we’ve learned more prefixes and are exchanging more routes with all our top-of-rack devices. The core network is now able to see and reach the pod CIDR networks, which are listed here.

So, that’s it really. It’s a pretty simple demo, and it’s a very simple configuration. You can enable BGP on Cilium with just one flag, and advertising the pod CIDRs and establishing BGP peering takes just a handful of lines of YAML. As you can see, once you apply the policy with kubectl, it’s picked up by the Cilium agents: a virtual router instance is created, peering is established, and network routes are exchanged, which establishes connectivity between our Kubernetes environment and our on-prem network and simplifies our connections. And that’s it for today, and the end of the demo. Thanks for watching, and have a great day.