
It’s DNS. You know it’s DNS. But how do you prove it in your Kubernetes Cluster?


DNS is a common cause of outages and incidents in Kubernetes clusters. If you’ve read through Kubernetes Failure Stories, you have a clear sense of just how common DNS problems are and what their impact can be. But how do you debug and troubleshoot these DNS issues? How do you conclusively prove that a problem is related to DNS? And finally, how do you help your application teams help themselves by investigating issues on their own? With Cilium’s capability to troubleshoot Kubernetes DNS!

Let’s dive into the technical details of how to systematically troubleshoot DNS issues in Kubernetes clusters. We can use Cilium’s open source Hubble capabilities to identify and inspect DNS issues as well as set up monitoring so we can locate DNS issues even before incidents occur.

If you are not running Hubble yet, deploy it into your cluster by following the installation instructions.

Kubernetes DNS 101

If you are not familiar with how Kubernetes leverages DNS for service discovery, this section will give you a brief introduction. You can skip this section if you are already familiar with the DNS concepts of Kubernetes.

Kubernetes pods and services are assigned transient IP addresses, so a service discovery mechanism is needed to map the persistent service and pod names to these temporary IP addresses on the fly. To implement this functionality, Kubernetes assigns a fully qualified domain name (FQDN) to services and pods and configures pods to use CoreDNS as their DNS server. Pods are then able to look up service and pod names using DNS to retrieve the transient IP addresses.
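
For example, assuming a Service named my-service in namespace my-namespace (hypothetical names), a pod can resolve it by its FQDN. A quick way to try this yourself is with a throwaway debug pod:

# Start a temporary pod with DNS tools and resolve a (hypothetical) service FQDN
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup my-service.my-namespace.svc.cluster.local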

Even though pods are assigned an FQDN as well, it is common practice to perform service discovery via the Kubernetes service name as shown in the diagram below:

[Diagram: Kubernetes DNS resolution via the service name]

Depending on the type of Kubernetes service, CoreDNS will respond with a ClusterIP or directly with a list of PodIPs (headless service). The pod connecting to the service can then initiate a connection to the returned IP address(es). For services of type ClusterIP, the Kubernetes networking layer automatically translates connections to that ClusterIP to the PodIP of one of the pods selected by the service, as illustrated by step (2) in the diagrams below:

[Diagram: DNS resolution (1) followed by connection establishment (2)]
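
You can observe this difference directly by resolving both kinds of services from inside a pod. A minimal sketch, using hypothetical service and pod names:

# ClusterIP service: CoreDNS returns the single virtual ClusterIP
kubectl exec -it my-client-pod -- nslookup my-clusterip-svc.default.svc.cluster.local

# Headless service (clusterIP: None): CoreDNS returns the PodIPs of the backing pods
kubectl exec -it my-client-pod -- nslookup my-headless-svc.default.svc.cluster.local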

If errors occur in this first step, they are typically referred to as DNS resolution errors or, more broadly, DNS issues. Errors during the second step are generally referred to as network connectivity issues.
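
In application logs the two failure modes often look similar, but at the command line they are easy to tell apart. An illustrative sketch (service names are hypothetical, output abbreviated):

# Step (1) fails - DNS resolution error: the name cannot be resolved at all
curl http://broken-name.default.svc.cluster.local
# curl: (6) Could not resolve host: broken-name.default.svc.cluster.local

# Step (2) fails - connectivity issue: the name resolves, but the connection never completes
curl --connect-timeout 5 http://my-service.default.svc.cluster.local
# hangs and eventually exits with a timeout (curl exit code 28) if packets to the ClusterIP are dropped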

If you want to learn more about how Kubernetes uses DNS, see DNS for Services and Pods in the Kubernetes documentation.

How to monitor DNS errors in a Kubernetes Cluster?

Network-related errors can be challenging to troubleshoot because most applications will only log a generic timeout error when a network connection fails. Even worse, the cause of the failure can be anything from application problems, network connectivity issues, or misconfigured firewall rules to DNS issues, or a combination of the above, and the error messages logged rarely provide sufficient context to differentiate between them.

To assist in monitoring and troubleshooting these errors, Hubble can use Cilium’s Advanced Network Policies to monitor all DNS traffic and maintain metrics representing DNS error scenarios. The simplest DNS error scenario is the DNS server returning an error directly to the application pod. Hubble maintains a metric that keeps track of all such errors. You can use a metrics collection stack such as Prometheus and Grafana to collect and graph Hubble’s DNS error metrics using the following query:

sum(rate(hubble_dns_responses_total{rcode!="No Error"}[1m])) by (pod, qtypes, rcode)

Using Grafana, we can generate a graph like the following, which will show the number of DNS errors occurring in the entire cluster at any time:

[Dashboard: failing DNS resolutions across the cluster]
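
Note that these metrics are only emitted once Hubble’s DNS metrics have been enabled. A sketch of how this can be done via Helm, assuming Cilium was installed from the cilium/cilium chart (verify the exact values against the Hubble metrics documentation for your Cilium version):

helm upgrade cilium cilium/cilium --namespace kube-system --reuse-values \
  --set hubble.metrics.enabled="{dns,drop,tcp,flow,icmp,http}"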

It is good practice to also set up a Prometheus alert on the number of DNS errors to receive an alert notification when the number of errors exceeds a certain threshold.
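A minimal sketch of such an alerting rule is shown below; the threshold, duration, and labels are placeholders to tune for your environment:

groups:
- name: hubble-dns
  rules:
  - alert: HubbleDNSErrorRateHigh
    expr: sum(rate(hubble_dns_responses_total{rcode!="No Error"}[5m])) > 1
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "Hubble has observed more than 1 DNS error per second for the last 10 minutes"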

But what if DNS resolution fails without the pod receiving a DNS error? This can happen if the network packets carrying the DNS response are being dropped. You can use Hubble to track a metric that shows the balance between DNS requests and DNS responses over time. Any significant imbalance in this graph indicates DNS requests remaining unanswered:

[Graph: missing DNS responses]
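
A query along these lines can drive such a panel. This is a sketch based on Hubble’s hubble_dns_queries_total and hubble_dns_responses_total metrics; adjust the window and labels to your setup:

sum(rate(hubble_dns_queries_total[1m])) - sum(rate(hubble_dns_responses_total[1m]))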

Understanding the presence of DNS errors is a vital first step. The next step is to track down the source of DNS errors and identify affected pods.

How to identify Pods receiving DNS errors?

Knowing that DNS errors are occurring is a good start, but we need to know which application pods are being affected by them. Using the Hubble CLI, we can query the flow history on each node to identify the pods which have received DNS errors.

First, let’s set up a port forward to the hubble-relay service using the Cilium CLI:

cilium hubble port-forward &

This is a convenience command in the Cilium CLI; the same port forward can also be set up with kubectl directly.
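
For reference, a roughly equivalent kubectl command is shown below; the hubble-relay service port mapping may differ between Cilium versions, so check your own hubble-relay Service:

kubectl port-forward -n kube-system svc/hubble-relay 4245:80 &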

We can now use the hubble CLI to query the flow history to extract the names of all pods in the cluster which have received DNS errors in the last 10 minutes:

hubble observe --protocol dns --since=10m -o json | \
  jq -r 'select(.l7.dns.rcode!=null) | .destination.namespace + "/" + .destination.pod_name + " " + .l7.dns.query' | \
  sort | uniq -c | sort -r

198 starwars/jar-jar-binks-6945758d55-bwrbj unknown-galaxy.svc.cluster.local.
198 starwars/jar-jar-binks-6945758d55-bwrbj unknown-galaxy.starwars.svc.cluster.local.
198 starwars/jar-jar-binks-6945758d55-bwrbj unknown-galaxy.cluster.local.
198 starwars/jar-jar-binks-6945758d55-bwrbj unknown-galaxy.

The output of the above command illustrates a typical example: a particular pod, jar-jar-binks-6945758d55-bwrbj in namespace starwars, consistently fails to look up the FQDN unknown-galaxy while attempting to resolve all the variants of the DNS suffix search list.
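
The search list itself comes from the pod’s /etc/resolv.conf, which Kubernetes generates. You can inspect it directly; this sketch assumes the workload is a Deployment named jar-jar-binks, and the values shown in the comments are illustrative:

kubectl exec -n starwars deploy/jar-jar-binks -- cat /etc/resolv.conf
# search starwars.svc.cluster.local svc.cluster.local cluster.local
# nameserver 10.96.0.10
# options ndots:5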

How to track Pod Context in Metrics?

The previous query does not attach any additional context, and the metric describes the DNS errors and DNS response balance for an entire cluster or node. However, using programmable metrics, Hubble can be configured to attach additional context and scope the metric by namespace, security identity, or even individual pods. As an example, the following graph has been configured to label DNS errors with the name of the pod receiving the error:

[Graph: pods with DNS errors]

The Prometheus query used is:

topk(10, sum(rate(hubble_dns_responses_total{rcode!="No Error"}[1m])) by (pod, destination))

See the Hubble Metrics Documentation for details on how to configure Hubble metrics with these labels.
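
As a rough sketch, this per-pod labeling can be configured through the metric’s context options in the Helm values; the exact option names and their availability depend on your Cilium version, so double-check them against the documentation above:

hubble:
  metrics:
    enabled:
      - dns:query;sourceContext=pod;destinationContext=pod
      - drop
      - tcp
      - flow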

Counting the number of DNS errors per pod helps to quickly identify which pods are subject to DNS resolution failures, but it doesn’t yet tell us why the resolution is failing.

How to debug the DNS resolution of a pod

Continuing the troubleshooting process of the previous section, we can use Hubble to retrieve the detailed flow log and gain insights into the entire DNS resolution process.

We can then extract the entire DNS resolution history of that pod as observed in the last minute:

hubble observe --since=1m -t l7 --protocol DNS --label=context=starwars,name=jar-jar-binks -o json | \
  jq -r '.time + " " + .Summary'

2021-06-18T23:38:50.525461586Z DNS Query unknown-galaxy.starwars.svc.cluster.local. AAAA
2021-06-18T23:38:50.525635993Z DNS Query unknown-galaxy.starwars.svc.cluster.local. A
2021-06-18T23:38:50.526696890Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.starwars.svc.cluster.local. A)
2021-06-18T23:38:50.526879252Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.starwars.svc.cluster.local. AAAA)
2021-06-18T23:38:50.527683920Z DNS Query unknown-galaxy.svc.cluster.local. A
2021-06-18T23:38:50.528109768Z DNS Query unknown-galaxy.svc.cluster.local. AAAA
2021-06-18T23:38:50.529312382Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.svc.cluster.local. A)
2021-06-18T23:38:50.530374703Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.svc.cluster.local. AAAA)
2021-06-18T23:38:50.531003140Z DNS Query unknown-galaxy.cluster.local. A
2021-06-18T23:38:50.531444077Z DNS Query unknown-galaxy.cluster.local. AAAA
2021-06-18T23:38:50.531864694Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.cluster.local. A)
2021-06-18T23:38:50.532348542Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy.cluster.local. AAAA)
2021-06-18T23:38:50.532901848Z DNS Query unknown-galaxy. AAAA
2021-06-18T23:38:50.533193555Z DNS Query unknown-galaxy. A
2021-06-18T23:38:50.535023614Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy. AAAA)
2021-06-18T23:38:50.535522029Z DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy unknown-galaxy. A)

The output nicely illustrates how Kubernetes configures a pod’s DNS config to search a list of domain names, and how each request fails for both IPv4 (A) and IPv6 (AAAA) with an error indicating that the corresponding DNS name could not be found.

[Diagram: DNS suffix search list resolution]

If the DNS server is returning any errors, you will see them in this output.
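
A practical side effect of the search list: looking up an external name without a trailing dot first walks all search domains, producing the NXDOMAIN round trips seen above. Appending a trailing dot makes the name fully qualified and skips that expansion. A quick sketch reusing the example workload:

# Walks the search list first: several NXDOMAIN answers before the final result
kubectl exec -n starwars deploy/jar-jar-binks -- nslookup google.com

# Fully qualified name (trailing dot): resolved directly, no search-list expansion
kubectl exec -n starwars deploy/jar-jar-binks -- nslookup google.com.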

How to debug missing DNS responses

What if the symptom isn’t DNS errors, but DNS responses are missing altogether? Assuming that the pod starwars/jar-jar-binks-6945758d55-bwrbj is failing to perform DNS resolution, we can check for network packet drops from and to that pod by running:

hubble observe -n starwars -l name=jar-jar-binks -o table

TIMESTAMP             SOURCE                                          DESTINATION                              TYPE            VERDICT     SUMMARY
Jun 21 17:56:16.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/coredns-74ff55c5b-8dc2t:53   Policy denied   DROPPED     UDP
Jun 21 17:56:16.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/coredns-74ff55c5b-8dc2t:53   Policy denied   DROPPED     UDP
Jun 21 17:56:21.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/kube-dns:53                  from-endpoint   FORWARDED   UDP
Jun 21 17:56:21.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/coredns-74ff55c5b-8dc2t:53   Policy denied   DROPPED     UDP
Jun 21 17:56:21.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/coredns-74ff55c5b-8dc2t:53   Policy denied   DROPPED     UDP
Jun 21 17:56:21.204   starwars/jar-jar-binks-6945758d55-bwrbj:52020   kube-system/kube-dns:53                  from-endpoint   FORWARDED   UDP

In this example, the cause of the DNS resolution failure is simple: the corresponding UDP packets are being dropped because they are denied by the configured network policies.
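
If that is the case, the fix is to explicitly allow DNS egress towards kube-dns in the policy. A minimal sketch of such a rule as a CiliumNetworkPolicy, reusing the labels from the example above (adapt the selectors to your own workloads):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: starwars
spec:
  endpointSelector:
    matchLabels:
      name: jar-jar-binks
  egress:
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"

The dns rule with matchPattern also keeps DNS visibility enabled, so the proxied DNS queries and responses shown earlier remain observable.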

If the issue is still unclear, the entire network transaction can be retrieved to identify at which exact point packets are being dropped:

hubble observe -n starwars -l name=jar-jar-binks -o table

TIMESTAMP             SOURCE                                          DESTINATION                                     TYPE           VERDICT     SUMMARY

Jun 21 18:53:44.772   starwars/jar-jar-binks-6945758d55-bwrbj:56419   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.starwars.svc.cluster.local. AAAA
Jun 21 18:53:44.774   starwars/jar-jar-binks-6945758d55-bwrbj:56419   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.starwars.svc.cluster.local. A
Jun 21 18:53:44.775   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:56419   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.starwars.svc.cluster.local. A)
Jun 21 18:53:44.776   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:56419   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.starwars.svc.cluster.local. AAAA)
Jun 21 18:53:44.777   starwars/jar-jar-binks-6945758d55-bwrbj:40863   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.svc.cluster.local. AAAA
Jun 21 18:53:44.778   starwars/jar-jar-binks-6945758d55-bwrbj:40863   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.svc.cluster.local. A
Jun 21 18:53:44.779   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:40863   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.svc.cluster.local. AAAA)
Jun 21 18:53:44.779   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:40863   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.svc.cluster.local. A)
Jun 21 18:53:44.780   starwars/jar-jar-binks-6945758d55-bwrbj:32869   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.cluster.local. AAAA
Jun 21 18:53:44.780   starwars/jar-jar-binks-6945758d55-bwrbj:32869   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com.cluster.local. A
Jun 21 18:53:44.784   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:32869   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.cluster.local. A)
Jun 21 18:53:44.784   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:32869   dns-response   FORWARDED   DNS Answer RCode: Non-Existent Domain TTL: 4294967295 (Proxy google.com.cluster.local. AAAA)
Jun 21 18:53:44.785   starwars/jar-jar-binks-6945758d55-bwrbj:60229   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com. AAAA
Jun 21 18:53:44.785   starwars/jar-jar-binks-6945758d55-bwrbj:60229   kube-system/coredns-74ff55c5b-8dc2t:53          dns-request    FORWARDED   DNS Query google.com. A
Jun 21 18:53:44.787   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:60229   dns-response   FORWARDED   DNS Answer "142.250.191.46" TTL: 30 (Proxy google.com. A)
Jun 21 18:53:44.813   kube-system/coredns-74ff55c5b-8dc2t:53          starwars/jar-jar-binks-6945758d55-bwrbj:60229   dns-response   FORWARDED   DNS Answer "2607:f8b0:4005:80f::200e" TTL: 30 (Proxy google.com. AAAA)

The above example shows a complete DNS transaction between jar-jar-binks and google.com, including the UDP packet carrying the request, the parsed DNS request, the parsed DNS response, and the UDP packet carrying the DNS response delivered back to the endpoint. If any of these are missing, you will know where the packet is being dropped.

Historical Data Views + Analytics

Thus far, we have talked mainly about troubleshooting an issue using current network data. But often, when troubleshooting a connectivity issue, having as much historical context as possible is critical to quickly resolving the incident: for example, comparing connectivity behavior to a baseline from before the incident, or identifying the exact timing of intermittent errors or faults that happened several hours ago.

Where Cilium Open Source primarily deals with the here and now, Cilium Enterprise extends this with Hubble Timescape, leveraging standard cloud storage APIs to store the Hubble flow data-stream and enable later querying and analytics on this data using the same Hubble API, CLI, and UI as is available for live data. Cilium also annotates flow data with additional metadata, such as the details about policies that were applied when a flow was allowed or denied, which further simplifies troubleshooting.

Providing Debugging Capabilities to Application Teams

We’ve also been assuming a member of the platform team is troubleshooting these issues, but often multiple teams are involved. Platform teams often encounter a “finger-pointing problem” as application and infrastructure operations teams struggle to identify if a network layer issue (e.g. DNS lookup failure, network policy drop, TCP layer connection failures/resets) is the likely root cause of the higher-layer alert, or if the problem is likely isolated to the application layer. If application teams are given access to Hubble’s rich streams of data about the health of connectivity between their services, they can quickly investigate application layer issues and alerts. Where open source Cilium focuses primarily on the needs of the Kubernetes Platform teams, Cilium Enterprise leverages the OpenID Connect (OIDC) standard to provide RBAC for Multi-Tenant Observability, securely giving application tenants access only to the connectivity data associated with their Kubernetes namespaces. By providing this capability in a self-service manner, Kubernetes platform teams can avoid being repeatedly pulled in to assist with such tasks and help application teams resolve their issues more quickly and efficiently.

Summary

DNS issues are a frequent cause of outages and incidents in Kubernetes clusters, and they have typically been hard to detect and troubleshoot. In this guide, we have explored how Hubble, using Cilium and eBPF, can help to identify, track down, and troubleshoot DNS issues in your Kubernetes cluster.


Related

Getting Started with Cilium

Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration. In this interactive, hands-on lab we provide you a fully fledged Cilium installation on a small cluster and a few challenges to solve. See for yourself how Cilium works and how it can help you by securing a moon-sized battlestation in a “Star Wars”-inspired challenge.

Cilium Hubble Cheatsheet – Kubernetes Network Observability in a Nutshell

Getting started with Cilium Hubble, the observability tooling, is now easier with our Cheat Sheet and CLI walkthrough video.


Reducing Kubernetes tool sprawl: Tietoevry uses Cilium and Hubble

Tietoevry uses Isovalent Enterprise for Cilium to have advanced network policies (DNS!), reduce tool sprawl, and get the necessary insights to monitor the various SLAs.

