• Youssef Azrak
    About the speaker

    A confident, reliable, flexible and trilingual IT engineer with extensive practical experience and the drive and determination needed to resolve complex network issues. Also possessing effective organisational skills and an excellent working knowledge of networking technologies, I am committed to keeping up to date with the latest developments. I enjoy being part of a team as well as managing, motivating and training people, and I thrive in highly pressurized and challenging working environments. My multicultural and multilingual background allows me to build strong customer relationships and communicate effectively.

Cilium Tech Talks – HA FQDN

[11:10] In this demo by Youssef Azrak, you will learn about the HA DNS Proxy feature of Isovalent Cilium Enterprise.

Transcript

So, we are going to talk about the HA DNS proxy. But before jumping into the HA DNS proxy, let’s try to understand why we have a DNS proxy with Cilium at all. The reality is that we always hear “It’s DNS”, so let’s take a Kubernetes cluster as an example. By default, in a Kubernetes cluster, everything is allowed from a security standpoint, which is obviously not ideal. We have tools that allow us to filter and lock down the cluster: Kubernetes network policies let you filter based on criteria such as pod selectors and destination IPs. Now take the example of workloads that need to reach an S3 bucket on the internet. If the administrator has to keep entering the IP addresses behind the S3 URL, that becomes very problematic, simply because in today’s cloud native world and Kubernetes clusters, IP addresses are constantly changing. It doesn’t make any sense to start adding destination IPs by hand.
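As a minimal sketch of what that IP-pinning approach looks like with a plain Kubernetes network policy (namespace, labels and the CIDR below are placeholders, not taken from the talk):

```bash
# Pinning egress to hard-coded IPs with a standard NetworkPolicy.
# Every time the S3 endpoint's addresses change, a list like this goes stale.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-s3-by-ip
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: worker
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 52.216.0.0/15   # an example AWS S3 range; these rotate over time
      ports:
        - protocol: TCP
          port: 443
EOF
```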

This is where Cilium network policies come in. Cilium network policies extend Kubernetes network policies and add, for example, layer 7 DNS policies. These policies allow you to specify an FQDN and say, “I want to allow traffic to this S3 bucket for this namespace or these specific pods.” This is made possible by using a DNS proxy in the background, which gives us a default-deny security model where everything is denied except what you explicitly allow.
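A sketch of the kind of policy described here, assuming a hypothetical namespace and pod label (not values from the talk): DNS lookups from the pod go through the DNS proxy, and egress is only allowed to names matching the FQDN rule.

```bash
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-s3-by-fqdn
  namespace: demo
spec:
  endpointSelector:
    matchLabels:
      app: worker
  egress:
    # Let the pod talk to kube-dns; the dns rule makes the proxy inspect lookups.
    - toEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: kube-system
            k8s-app: kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"
    # Allow traffic only to IPs that were resolved for this FQDN pattern.
    - toFQDNs:
        - matchPattern: "*.s3.amazonaws.com"
      toPorts:
        - ports:
            - port: "443"
              protocol: TCP
EOF
```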

Now, the question is why do we need an HA DNS proxy? With the Cilium OSS version, the DNS proxy is embedded inside the Cilium agent. This means that if you want to upgrade Cilium in an existing cluster, for example to install the new Cilium service mesh, the upgrade process involves shutting down each Cilium agent pod and bringing up a new one on each node.

The time between the old pod going down and the new one coming up is when you would lose the proxy. For regular traffic this is not an issue: Cilium uses eBPF to forward traffic, and the eBPF plumbing for IPv4 and IPv6 stays in place. However, as the name suggests, the DNS proxy has to intercept packets, and if the agent is down, the proxy cannot intercept them anymore. This is where the HA DNS proxy comes in. It takes the DNS proxy logic out of the Cilium agent and puts it in an external DNS proxy called the Cilium DNS proxy. This means no downtime: even if the Cilium agent is down or experiencing issues such as a lack of resources, the Cilium DNS proxy can take care of DNS packets. So, if you’re doing an upgrade and don’t want people to access certain websites, this will still be enforced within your environment. In the background, the Cilium DNS proxy – the HA DNS proxy – listens on the same socket as the Cilium agent, and the kernel takes care of load balancing DNS packets to either the Cilium agent or the Cilium DNS proxy.

I’ll just refer to the Cilium DNS proxy as the “Cilium proxy” to make it easier. The Cilium agent has some specific features. So let’s take a look at the life cycle of a packet from a pod that needs to reach an S3 bucket. The first thing that happens is a DNS request to determine the IP address of the URL. This request is proxied to the Cilium DNS proxy, which first checks whether there is an FQDN policy that allows or denies this URL. If it is denied, the client receives a refused or denied response, and the Cilium DNS proxy syncs with the Cilium agent for observability purposes and lets it know that a request was dropped. If it is not denied, the Cilium DNS proxy forwards the packet to the DNS servers to get the IPs and syncs those IPs with the Cilium agent.

The Cilium agent will then create the data path for this packet, and the client will get the response and be able to send traffic to the URL. As shown in the diagram, the Cilium agent and the Cilium DNS proxy are HA, meaning that if the Cilium agent goes down, you can use the Cilium DNS proxy, and if the Cilium DNS proxy goes down, you can use the Cilium agent. The FQDN cache will still be in sync even after one of the two comes back online. So, for example, if you make a couple of requests on the Cilium DNS proxy and the Cilium agent goes down, once the Cilium agent comes back up again, it will sync all of that information.
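For reference, one way to peek at the agent’s FQDN cache once it is back up is the CLI inside the agent pod; the exact binary name varies by Cilium version and the pod name below is a placeholder.

```bash
# In recent releases the in-pod binary is cilium-dbg (older releases use cilium).
kubectl -n kube-system exec -it <cilium-agent-pod> -- cilium-dbg fqdn cache list
```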

You should be able to see my screen. Okay, so here’s what I have: a Kubernetes cluster running on EKS with the Cilium agents and the Cilium DNS proxy running. You can see I have three nodes, three Cilium DNS proxies, and three Cilium agents in this cluster. I also have a namespace called “youssef” with a pod that I’ll be using for my testing. I also have a Cilium network policy. It is a layer 7 DNS policy, and to keep it very simple, it allows all DNS requests from this pod to the CoreDNS pods – to the DNS servers – and also allows traffic to two FQDNs: google.com and cilium.io on ports 80 and 443. So, what’s going to happen if we do a “curl” on a website like bbc.co.uk? Let me also enable some Hubble observability so we can see what’s going on.
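Roughly the checks from this part of the demo; the pod name is a placeholder and the flags are standard kubectl/Hubble/curl usage rather than a verbatim capture from the talk.

```bash
kubectl -n youssef exec -it <pod-name> -- curl -sI https://cilium.io    # allowed by the FQDN policy
kubectl -n youssef exec -it <pod-name> -- curl -sI https://bbc.co.uk    # blocked by the FQDN policy
kubectl -n youssef exec -it <pod-name> -- dig +short bbc.co.uk          # DNS itself is still allowed

# DNS requests show up as L7 flows; the blocked connection shows up as a drop.
hubble observe --namespace youssef --protocol dns
hubble observe --namespace youssef --verdict DROPPED
```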

You can see right now for bbc.co.uk, the DNS request is allowed, but we have a policy-denied verdict, so the traffic is being dropped by this policy. This is completely normal and expected. If we go to google.com now, the traffic is allowed, and we can get there. Same with cilium.io – we can reach those websites. If I do a “dig” DNS request on bbc.co.uk, we get an answer, which is fair enough. Now, I’ll go to the daemon set and simulate the agents being down. To do that, I’ll patch the daemon set and put a false or fake tag on the image, so all the agents will go into a crash loop state, or more specifically, an ImagePullBackOff. This way, I’m sure that all the agents are down. Yeah, there is one that is not down. One sec, I’m just putting in the name of the pod to make sure that all the pods are down. And now I’ve lost my connection to Hubble observe because it’s using the Cilium agent, and as we don’t have any more Cilium agents up, it cannot reach out and get the information. So now, as you can see, no more Cilium agents – everything is down, but I still have the Cilium DNS proxy, and if I do a “curl” on cilium.io again, you can see that it still works. Same with google – still working, and this is being taken care of by the DNS proxy. And if I do a “curl” on bbc.co.uk, this is still denied. So, what happened here is that even with all the agents down, which is an extreme scenario, we are still able to enforce the policies that we have in our cluster, which makes the whole default-deny security model much more robust. And even if we do upgrades of the Cilium agent, of our CNI, we are still able to enforce those policies.
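One way to simulate “all agents down”, roughly as in the demo: point the cilium DaemonSet at an image tag that does not exist so that replacement agent pods end up in ImagePullBackOff, then remove any agents still running. The image repository and tag below are placeholders, not the exact values used in the talk.

```bash
# Break the agent image so new agent pods cannot start.
kubectl -n kube-system patch daemonset cilium --type=json \
  -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"quay.io/cilium/cilium:does-not-exist"}]'

# Delete any agent pods that are still running so none of them is left up,
# then confirm: the agents are gone while the external DNS proxy pods stay up.
kubectl -n kube-system delete pods -l k8s-app=cilium
kubectl -n kube-system get pods
```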