
Isovalent Enterprise for Tetragon: Deeper Host Network Observability with eBPF

Dean Lewis

Data is the new gold; we’ve all heard it before. But if data is the new gold, how do we refine it into something precious? Raw data is akin to unrefined ore: valuable elements mixed in with impurities. How do we purify the sheer volume of data we collect?

In this blog post, we will focus on how Tetragon, powered by eBPF, can provide network observability directly from the kernel of your platform. We’ll walk you through example use cases such as bandwidth, latency, and DNS monitoring, from the host, from the pod, and even from the binaries running inside the containers!

With those questions in mind, let’s turn to observability of a platform and all the challenges it presents. One of the most common challenges I have come across in my career is existing tooling that cannot provide the right level of data. Typically, network monitoring solutions are designed to be consumed off the shelf, provide reporting capabilities, and take a biased view of the objects they are responsible for, depending on the platform component they are monitoring.

This is not necessarily a bad thing! Hubble, the observability component that works hand in hand with Cilium, is designed to expose the underlying network flows between workloads and other identities. It is engineered to support not only cloud network teams but also the application teams who consume the platform, providing easy-to-consume visibility of the networking aspect of their workloads. 

However, some platform owners need more granularity, for example for specific use cases such as multicast, SR-IOV, or Segment Routing over IPv6 (SRv6), to name but a few.

To summarize the problem, one Isovalent customer, a leading developer platform, once commented:

“We spend millions in software and hardware, and we still don’t know what is going on with the overlay network, what connections are made by which process!”

At Isovalent, we’ve felt that pain during our years of working across platforms and building in the network space. That is why we built Tetragon.

What is Tetragon?

Tetragon provides host-based visibility using eBPF, with smart in-kernel filtering and aggregation logic for minimal overall overhead. It achieves deep visibility without requiring additional hardware, application, or platform changes. 

Tetragon evolved from features originally created as part of Isovalent Enterprise for Cilium and was later open-sourced as a sub-project of the Cilium project. This means that Tetragon is developed alongside Cilium and can be deployed either standalone or together with Cilium, in both Kubernetes and host-based environments. This flexibility allows users to adopt Tetragon regardless of their platform and supporting systems.

Tetragon allows for deep kernel observability by applying high-level eBPF policies (try one out!) with the Tetragon agent. These policies sit at the kernel level, constantly monitoring for events of interest. Once a security-significant event is observed, the eBPF policy is triggered and executes a defined action: actions range from reporting the event to user space, to aggregating metrics about the observed event, to stopping the event from executing altogether.
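
To make this concrete, below is a minimal sketch of what an open-source TracingPolicy with an enforcement action can look like. It follows the shape of the upstream Tetragon examples (hooking the fd_install kernel function and terminating a process that opens a watched file); treat the specific hook, path, and action as illustrative rather than prescriptive.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: fd-install-example
spec:
  kprobes:
  - call: "fd_install"       # kernel function called when a file descriptor is installed
    syscall: false
    args:
    - index: 0
      type: "int"
    - index: 1
      type: "file"           # allows matching on the file being opened
    selectors:
    - matchArgs:
      - index: 1
        operator: "Equal"
        values:
        - "/tmp/tetragon"    # illustrative path to watch
      matchActions:
      - action: Sigkill      # stop the offending process; omit to only report the event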

[Diagram: Tetragon overview]

Some of the key advantages of Tetragon:

  • Associate network and process data
    • Ever wondered why a host is so chatty on the network, and which specific application or binary is causing it? Tetragon ties network events to process information down to the binary, including parent process details, and can generate process ancestry trees for full visibility.
  • Minimal overhead with eBPF
    • eBPF filtering allows for meaningful observability by capturing events directly from the kernel with minimal CPU and memory overhead. Check out the benchmarks in this recent report.
  • Depth of Network Data, regardless of protocol
    • Decode TCP, UDP, TLS, HTTP, DNS, and more, while matching events to process ancestry information. For Kubernetes environments, enriched metadata is provided, such as labels and pod names.
  • Easy to install, configure, and adopt
    • No application or code changes are required, and no additional network devices are needed to collect and parse traffic. With data and events parsed using eBPF, existing observability solutions such as Grafana can be used to visualize the data, making it accessible to application teams in their existing tooling (see the sketch after this list).
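
As an example of how little wiring is involved, the sketch below shows Helm values that expose the Tetragon agent’s metrics endpoint for Prometheus to scrape, so the data can be visualized in Grafana. It assumes the open-source Tetragon Helm chart; key names and defaults can differ between chart versions, so verify them against the values file of the version you deploy.

# values.yaml sketch for the Tetragon Helm chart (verify keys against your chart version)
tetragon:
  prometheus:
    enabled: true            # expose agent metrics
    port: 2112               # commonly the default metrics port in the open-source chart
    serviceMonitor:
      enabled: true          # create a ServiceMonitor if the Prometheus Operator is installed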

Building on open-source Tetragon and its ability to generate event and metric data from the kernel, Isovalent maintains Isovalent Enterprise for Tetragon, which adds unique enterprise capabilities on top of the open-source offering. In the rest of this blog post, we dive into the enterprise features that give us deeper kernel-level visibility into the protocols our systems use, and into real-world use cases where this level of detail improves troubleshooting.

Understanding Network Protocol Parsing with Isovalent Enterprise for Tetragon

In this walkthrough, we focus on Tetragon’s eBPF-powered network observability, using the protocol parsing capabilities available in Isovalent Enterprise for Tetragon. These provide a clean and simple way to configure which events to collect from your platform, whether that is a single standalone bare-metal host or a full-scale Kubernetes cluster.

Below is an example policy that enables the Tetragon parser to collect the relevant protocol events (TCP, UDP, DNS) as well as interface events.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: network-tracing
spec:
  parser:
    dns:
      enable: true
    interface:
      enable: true
      packet: true
    tcp:
      enable: true
      statsInterval: 20
    udp:
      cgroup: true
      enable: true
      statsInterval: 20

In addition to providing deep insights into network events, such as network connections made by processes, Tetragon’s network observability includes information related to the listen system call. This makes it possible to conduct passive network analysis, such as auditing all listening ports and sockets per process directly from eBPF events, rather than running nmap or other active port scans.

After applying the example network observability policy above, process_connect, process_close, process_accept, and process_listen events will be generated for TCP and UDP sockets.

Additionally, the DNS parser helps translate IP addresses into human-readable FQDNs, so we can easily understand which systems are communicating with one another and with the outside world.

Now that we have deeper, kernel-level observability into our system, let’s focus on some use cases, using the event data alongside the Tetragon network observability dashboards.

Deeper network troubleshooting with Tetragon and Grafana dashboards

Before we break into some of the use cases covered by our dashboards, let’s take a high-level look at the flow and components of the dashboards themselves. We believe this to be as important as the metrics from Tetragon!

When creating these dashboards with troubleshooting in mind, we thought back to our engineers’ experiences as SREs of platforms and beyond. 

Each dashboard, where relevant, has the same set of data filters available, covering cluster, namespace, workload, pod, node, binary, and remote DNS name. These filters give you an easy way to chop and change the data visualization, spot patterns, and then dive deeper into only the workloads and processes of interest, removing noise from the rest of the platform’s events.

Each dashboard is split into sections: an overview, transmit/receive by Kubernetes workload or node and binary, and drops by workload or node and binary. The same information is also broken down by remote DNS name.

Separate dashboards share the same layout and information across TCP/UDP latency, TCP/UDP throughput, and HTTP golden signals. There are also dashboards that focus directly on host binaries instead of Kubernetes workloads.

The result is a consistent set of dashboards across protocols and network characteristics.

Now let’s focus on those use cases and see the dashboards in action!

Which application consumes all the bandwidth?

Even if you are running your platforms in a cloud provider, bandwidth, although elastic, is not infinite, and egress traffic typically has a cost attached!

The first dashboard we will dive into is the “TCP Throughput – Socket” dashboard. It covers the metrics and information you need to understand TCP traffic transmitted and received across your platform, with filters to pinpoint only the traffic you are interested in, by namespace, workload, pod, node, binary, and remote DNS name.

The first panel of the dashboard provides an overview of bytes and segments, sent and received, with line graphs to help spot changes in traffic patterns that may point to errors.

Displaying TCP segments as well as bytes is important, as it tells a wider story about network health. TCP divides payload data into chunks for transmission across the network; once a TCP header is added to a chunk for tracking purposes, it becomes a segment. When segments are lost in transit, the result can be reduced throughput, increased latency, and degraded application performance.

As we scroll through the rest of the dashboard, you will see the following panels, which are available in most of the Tetragon network dashboards, providing consistency for easier troubleshooting (a sketch of how such breakdowns can be expressed as queries follows the list):

  • Transmit by K8s workload and binary
  • Receive by K8s workload and binary
  • Retransmits and drops by K8s workload and binary
  • Transmit by Node and Binary
  • Receive by Node and Binary
  • Retransmits and drops by Node and Binary
  • Transmit by Remote DNS Name
  • Receive by Remote DNS Name
  • Retransmits and drops by Remote DNS Name
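
If you want to reuse these breakdowns outside of the packaged dashboards, the panels above essentially boil down to rate queries grouped by namespace, workload, and binary labels. The sketch below expresses that idea as a Prometheus recording rule; the metric and label names are placeholders rather than the exact series exported by Isovalent Enterprise for Tetragon, so substitute the names your deployment actually exposes.

# Hypothetical recording-rule sketch; metric and label names are placeholders.
groups:
- name: tetragon-tcp-throughput
  rules:
  - record: workload_binary:tcp_bytes_sent:rate5m
    # transmit bytes per second, grouped the same way as the dashboard panels
    expr: sum by (namespace, workload, binary) (rate(tetragon_tcp_bytes_sent_total[5m]))
  - record: workload_binary:tcp_retransmits:rate5m
    # retransmits per second, useful for the drops and retransmits panels
    expr: sum by (namespace, workload, binary) (rate(tetragon_tcp_retransmits_total[5m]))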

In the spirit of dogfooding, I am going to show you how I’d monitor our own applications using Isovalent Enterprise for Tetragon. Diving into some of the panels of this dashboard, I’ve set filters to show only traffic from three namespaces: clickhouse-operator, hubble-enterprise, and hubble-timescape. These namespaces cover the observability applications Hubble and Hubble Timescape, which are part of an Isovalent Enterprise for Cilium platform.

We can see that the most active workload and binary combination is the ingester that is part of Hubble Timescape. The ingester is a deployment responsible for loading data from object storage into the ClickHouse database for indexing.

It’s no surprise that this workload and binary combination dominates not only the bytes sent, but also the number of segments sent and the size of those segments.

Given the description above, we would expect the received-bytes panel to show the ClickHouse workload and binaries, and the screenshot below confirms this. The largest segment size received also belongs to the ClickHouse workload.

This dashboard also breaks down retransmit and drop information. Retransmits and drops can happen in a network and its applications for a variety of reasons, such as resource exhaustion, rate limiting, or external factors such as DNS (we dive further into DNS troubleshooting later in this post).

An offshoot of this TCP Throughput dashboard is a secondary dashboard with the exact same focus, but designed so that you can filter for pod-to-pod communication.

Sticking with the same application workloads, the Hubble Timescape ingester and ClickHouse, the filters below have been configured on this dashboard, specifying the namespace, the workload, and the pod.

The rest of the dashboard reloads to show network metrics between the two pods, identified by the A and B selections.

Here we can see the ingester pod consistently sending data to ClickHouse, with ClickHouse acknowledging that data.

Each of the breakdowns shown here covers TCP; dashboards covering the same metrics for UDP are also available.

There’s latency in my application, where is it coming from?

Arguably, latency is one of the biggest challenges to define and understand in a platform: sometimes it breaks things completely, and sometimes it just leads to a poor end-user experience and an increase in service desk tickets.

Using the “TCP Latency – Socket” dashboard, we can dive into the TCP metrics gathered from the Kubernetes hosts and pods. This dashboard uses the smoothed round-trip time (SRTT) to understand latency within the Kubernetes platform, providing a view of the full round trip of application communication while smoothing out anomalies in the reported values.
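
As a quick refresher on what SRTT actually is: the kernel maintains it as an exponentially weighted moving average of measured round-trip times, in the spirit of RFC 6298, roughly SRTT = (1 - α) × SRTT + α × RTT_sample with α = 1/8, which is why a single outlier measurement nudges the reported value rather than spiking it.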

In the below dashboard view, no filters are currently defined, meaning we are given an overview of our full Kubernetes cluster. 

The initial overview panel of the dashboard displays a histogram of individual latency buckets, highlighting where most packets report their latency in microseconds (the x-axis figure). Here we expect the majority of packets to fall into the lowest-latency bucket (far left).

To the right of the histogram is the heatmap, which plots packet latency over the configured dashboard timeframe. In a well-performing system, the heatmap should “glow” along the lowest reported latency figure. However, in the environment below, we can also see a growing number of packets reported around the 1 ms mark.

Overall, the median SRTT is within the microsecond range, which is deemed acceptable for this platform. However, the 95th-percentile figure shows that 5% of TCP traffic is hitting 1.65 milliseconds of latency, with the right-hand line graph charting the pattern of latency over the selected dashboard time frame. We can see that, so far, there has been no spike or large change in the reported platform latency.
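
For reference, a P95 figure like this is typically derived from a latency histogram using a quantile function. The sketch below shows the idea as a Prometheus recording rule; the bucketed SRTT metric name is a placeholder for whatever your Tetragon deployment actually exports.

# Hypothetical sketch; the bucketed SRTT metric name is a placeholder.
groups:
- name: tetragon-tcp-latency
  rules:
  - record: namespace:tcp_srtt_seconds:p95
    # 95th percentile of smoothed round-trip time per namespace over 5 minutes
    expr: histogram_quantile(0.95, sum by (namespace, le) (rate(tetragon_tcp_srtt_seconds_bucket[5m])))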

To continue troubleshooting a specific application, we apply the dashboard filters, starting with the namespace, and can observe that over 50% of the P95 SRTT latency is coming from this particular namespace.

In the quantiles line graph, we can see that the reported P95 figure ranges from 500 microseconds to 2.5 milliseconds. For latency-sensitive applications, this could cause instability in data processing.

Using the “SRTT Quantiles by K8s Workload & Binary” panel within the dashboard, we can easily pinpoint which components of the application within the namespace are contributing to the increased reported latency. 

We can see that two components of our application are showing latency in the milliseconds. The most important is the “loader” component, which sends data to Kafka; its median latency is 4 ms and above. For the Kafka component, we can see that its median latency is within the microsecond range. However, the P90 and P95 values tell us that this component occasionally suffers higher latency. Since we understand how the application components communicate with one another, our starting point for improving the application’s performance will be to focus on improving response times from the Kafka element.

Once again, with this example, we focus on TCP traffic. However, the same dashboards and information are also available for UDP traffic.

Is it DNS?

DNS is always a special one! Especially in a Kubernetes environment, where we care about service names for lookups and queries because IP addresses are no longer fixed in place. 

Diving into DNS issues can be troublesome; I’ve previously written about how I used Cilium Hubble to spot DNS resolution issues in my own environment.

How can we spot DNS issues using the Tetragon Network Observability dashboards? That can be achieved in two ways. The first is a dedicated dashboard covering DNS resolution from the applications running in your environment. IP address to DNS name resolution is captured by the DNS parser, as you will note in the example tracing policy configuration provided earlier in this blog post. 

Before we look at the specific DNS dashboard, I want to keep focusing on the TCP Latency dashboard covered in the last section. 

Each dashboard allows you to filter on a remote DNS name, the destination the application binary is trying to communicate with.

In the below screenshot, I’ve filtered all requests to the AWS Route 53 FQDN, and we can see clearly that the only workload making these requests is External-DNS. However, the latency of responses is quite high, hovering around 75 ms over the past two days (the dashboard timeframe).

Of course, I could use this filter to identify the path of any type of traffic, not just DNS!

Now let’s look at the specific Tetragon DNS dashboard. 

Again we have similar filters available to dive into DNS requests across the platform, or from specific namespaces, workloads, and binaries themselves. 

The overview panels of the dashboard cover the high-level health of DNS requests and responses based on your top-level filters. In a healthy system, you would expect to see fairly uniform graphed output here.

The second half of the dashboard lets you drill down into the DNS names requested in your platform, most importantly showing which requested names do not exist and therefore cause DNS errors.

In the below screenshot, we can see that the main DNS errors come from lookups where the platform’s domain suffixes are automatically appended to the query.
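
These suffix-expanded lookups are usually a side effect of the cluster’s default resolv.conf search list and ndots setting. If they turn out to be pure noise for a given workload, one common mitigation is to tune the pod’s DNS options, as in the sketch below, which uses standard Kubernetes pod spec fields; adjust the values to suit your environment.

apiVersion: v1
kind: Pod
metadata:
  name: dns-ndots-example        # illustrative pod showing the dnsConfig fields
spec:
  dnsConfig:
    options:
    - name: ndots
      value: "1"                 # names containing a dot are tried as-is before search suffixes
  containers:
  - name: app
    image: busybox               # placeholder image
    command: ["sleep", "3600"]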

Learn more and try Tetragon now!

In this deep dive, we’ve covered how Tetragon can be used for network observability, with the walkthrough covering a Kubernetes environment. The same level of detail is also possible for standalone hosts, not just Kubernetes environments; at Isovalent, we have customers using the enterprise offering of Tetragon for their Linux host workloads too.

The dashboards covered in this blog post will be available as part of the Isovalent Enterprise for Tetragon 1.13 release. To get early access, reach out to the Isovalent team, or come and see us at KubeCon Paris for a live, in-person demo at our booth!

To continue learning more about Tetragon:

  • In the Tetragon 1.0 blog post, dive into the overhead benchmarks, early production use cases, and how adopters are benefiting from the enhanced security and observability provided by Tetragon.
  • Get started with the free Isovalent hands-on labs and explore use cases such as TLS visibility, Network Policies, Transparent Encryption, Mutual Authentication, Runtime Security, and Enforcement with Tetragon.
  • Learn about Tetragon via our dedicated videos.
  • Discover more about the powers of eBPF with the book authored by Liz Rice from Isovalent.
Author: Dean Lewis, Senior Technical Marketing Engineer
