About the speaker: Raymond de Jong, Field CTO

Cilium Cluster Mesh Demo

[07:18] Workloads usually run across multiple Kubernetes clusters, on premises and in the cloud. How do you bring them together? With Cluster Mesh! In this video, our own Raymond de Jong briefly explains the concept and the requirements, and walks through a demo of the capabilities.

Transcript

Hello and welcome to this Cilium cluster mesh demo. My name is Raymond de Jong, and I’m a senior solutions architect at Isovalent. Cilium cluster mesh has a number of use cases. First of all, it provides the capability to connect multiple Kubernetes clusters together and to provide service connectivity across those clusters. You can also do this across multiple cloud providers, giving you service connectivity between endpoints running in different clouds. Another use case is the capability to provide centralized services: you may have a central cluster from which you want to expose a service to different clusters, and we’ll see an example of what that looks like a bit later. In terms of security, Cilium cluster mesh gives you the capability to enforce your security policies across Kubernetes clusters using Cilium network policies. Finally, Cilium cluster mesh provides visibility of flows across your clusters using Hubble, which is built on top of Cilium.

This is an example of connecting multiple Kubernetes clusters together with cluster mesh. In this case, two Cilium-managed clusters are configured with a front end and a backend, and a backend service provides connectivity from the front-end endpoints to the backend endpoints. Using cluster mesh, you can announce those endpoints and their IPs from one cluster to the other clusters, meaning that the service will provide connectivity to endpoints across clusters. As shown in this example, if the backend pods in the right-hand cluster fail for whatever reason, traffic hitting the backend service will fail over to the remaining endpoints in the other cluster, which are still available.
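As a minimal sketch of that failover behavior, assuming two kubectl contexts named cluster1 and cluster2 and front-end/backend deployment names that are illustrative rather than taken from the demo:

    # Illustrative only: context, namespace, and deployment names are assumptions.
    # Scale the backend down to zero in cluster2 to simulate failure.
    kubectl --context cluster2 scale deployment backend --replicas=0

    # Requests from a front-end pod in cluster2 should still succeed,
    # served by the remaining backend endpoints in cluster1.
    kubectl --context cluster2 exec deploy/frontend -- curl -s backend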

Another example of using cluster mesh is providing centralized services connectivity for your Kubernetes clusters. In this example, a Vault service is configured in a centralized shared-services cluster, and the service for this Vault is exposed to all other Kubernetes clusters using Cilium cluster mesh.
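One way to express such a centralized service, sketched here under the assumption that the same Service object exists in every cluster while only the shared-services cluster runs the backing pods (the service name and port follow Vault's defaults; they are not from the demo):

    # Hypothetical manifest, applied in every cluster. Only the shared-services
    # cluster deploys matching pods, so all traffic resolves to its endpoints.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: vault
      annotations:
        io.cilium/global-service: "true"
    spec:
      ports:
        - port: 8200
      selector:
        app: vault
    EOF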

In order to be able to run cluster mesh across your Kubernetes clusters, it’s important to consider the following requirements. First of all, you need non-conflicting CIDR ranges and unique IP addresses between your clusters. You need IP connectivity between your clusters, for example over a VPN; what’s important here is that the nodes are able to communicate with each other across clusters. Consider using a load balancer for your cluster mesh API server. By default, the cluster mesh API server is exposed through a NodePort, which ties the life cycle of that API server to the life cycle of a node. Therefore, it’s recommended to implement some kind of load balancer solution to expose the cluster mesh API server. And finally, the network between clusters must allow inter-cluster communication.
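With the cilium CLI, enabling and connecting a mesh can look roughly like the following. This is a sketch assuming two kubectl contexts named cluster1 and cluster2; check the Cilium documentation for the flags supported by your version:

    # Expose the cluster mesh API server behind a load balancer rather
    # than the default NodePort (context names are assumptions).
    cilium clustermesh enable --context cluster1 --service-type LoadBalancer
    cilium clustermesh enable --context cluster2 --service-type LoadBalancer

    # Connect the two clusters and wait until the mesh reports healthy.
    cilium clustermesh connect --context cluster1 --destination-context cluster2
    cilium clustermesh status --context cluster1 --wait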

Typically, across clouds you will have firewalls restricting ingress and egress traffic. For cluster mesh, it’s important that you open the required ports between your clusters so that Cilium cluster mesh traffic can flow freely.
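As an illustration, the ports involved typically include the tunnel, health-check, and cluster mesh API server ports. The exact list depends on your datapath configuration, so treat the values below as assumptions and confirm them against the Cilium firewall requirements documentation:

    # Illustrative security-group style rules between cluster node CIDRs.
    # Verify each value against the Cilium docs for your configuration.
    # UDP 8472  - VXLAN tunnel traffic (if using the VXLAN datapath)
    # TCP 4240  - cilium-health endpoint for cross-cluster health checks
    # ICMP      - connectivity health probes between nodes
    # TCP 2379  - clustermesh-apiserver (etcd), if exposed via a load balancer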

Let’s now move on to the lab. I’ve configured two Kubernetes clusters, and these clusters are now connected using Cilium cluster mesh. That means I can now create services that are available across clusters. I’m using an example which is also available on the Cilium.io documentation website: a Star Wars-based example with a rebel-base service backed by rebel-base endpoints, and an x-wing deployment which is able to access the rebel-base service on port 80. What I’m highlighting here is the configuration of the service as type ClusterIP, listening on port 80. I’ve also prepared a cluster mesh test script which does a curl from an x-wing pod towards the service, and it does that in both clusters so we can check the connectivity across clusters.
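The service in the demo looks roughly like this, together with a minimal version of such a test script. The manifest follows the Cilium cluster mesh tutorial; the cluster context names are assumptions:

    # The rebel-base service: a plain ClusterIP service on port 80.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: rebel-base
    spec:
      type: ClusterIP
      ports:
        - port: 80
      selector:
        name: rebel-base
    EOF

    # Minimal test script: curl the service from an x-wing pod in
    # each cluster (context names are assumptions).
    for ctx in cluster1 cluster2; do
      echo "--- $ctx ---"
      kubectl --context "$ctx" exec deploy/x-wing -- curl -s rebel-base
    done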

So in each cluster, I’ve created a namespace with the same name and deployed the rebel-base and x-wing deployments and services there, and the service is available as well. As you can see right now, using a default service in each cluster basically means that connectivity is limited to endpoints in that given cluster. Enabling connectivity of your services across clusters means that you need to annotate the service with a special annotation to mark it as a global shared service. This is the annotation: io.cilium/global-service: "true", and you need to apply it in both clusters.
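Applying that annotation can be done with kubectl annotate, for example (the context names are assumptions):

    # Mark the service as global in both clusters so Cilium shares
    # its endpoints across the mesh.
    kubectl --context cluster1 annotate service rebel-base \
      io.cilium/global-service="true"
    kubectl --context cluster2 annotate service rebel-base \
      io.cilium/global-service="true"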

Then we apply that configuration. Now the service has been changed in cluster one; you can see the annotation is there. We need to do the same in cluster two, and as you can see, the annotation is there as well. Now each service in each cluster carries the global-service annotation, and this ensures that Cilium will advertise the endpoints to the other cluster, so that the service in each cluster can load balance not only to its local endpoints but also to the remote endpoints in other clusters.
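One quick way to confirm the annotation landed in each cluster (a sketch, again assuming the cluster1/cluster2 context names):

    # Print the service annotations in each cluster; both should show
    # io.cilium/global-service: "true".
    for ctx in cluster1 cluster2; do
      kubectl --context "$ctx" get service rebel-base \
        -o jsonpath='{.metadata.annotations}{"\n"}'
    done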

So in this example, we can try the script again, and you will see that for each cluster, responses come from endpoints in different clusters, meaning that the service in each cluster load balances across both. Note that you won’t see the endpoints of other clusters when you describe the service; you would only see the local endpoints. But you can actually see in the Cilium BPF map that the remote endpoints are advertised for that service.

To check that the endpoints are advertised for a service in the Cilium BPF map, you can use kubectl exec to run a command in the Cilium daemon set. Just remember to use the correct namespace.
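For example, assuming Cilium runs in the kube-system namespace (adjust to your installation):

    # List services and backends as seen by the Cilium agent; the global
    # rebel-base service should show both local and remote endpoint IPs.
    kubectl -n kube-system exec ds/cilium -- cilium service list

    # Or inspect the BPF load-balancing map directly.
    kubectl -n kube-system exec ds/cilium -- cilium bpf lb list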

If you take a look at the actual service IP, listening on port 80, you will see that it has both local and remote IP addresses as endpoints. This demonstrates that Cilium is able to load balance traffic for your services across clusters.

If you would like to learn how to set up Cilium cluster mesh yourself, I recommend visiting Cilium.io/learn to find links to documentation and tutorials to set it up. There are also examples included on how you can test a global service, like the one shown in this demo.

This concludes the demo. Thank you for watching, and until next time, bye bye.