
Can I Use Tetragon without Cilium? Yes!

Dean Lewis

eBPF-based enforcement, visibility, and forensics for Kubernetes regardless of your CNI

[Figure: Tetragon mini architecture]

One of the common questions I see across social media when users learn of the capabilities of Tetragon is “Can I Use Tetragon without Cilium?”

The answer is Yes! You can use Tetragon regardless of the Container Network Interface (CNI) implemented. Tetragon provides eBPF-based enforcement, visibility, and forensics for Kubernetes regardless of your CNI.

In this blog post, I’m going to walk you through an example using a cluster with Calico Open-Source configured, and Tetragon installed to dive into process events.

First, for those of you who haven’t dived into the world of eBPF-powered observability and security, let’s explain Tetragon.

What is Tetragon?

Tetragon is an open-source, eBPF-based security observability and runtime enforcement platform, which was donated by Isovalent to the CNCF in 2022. Isovalent also provides an extended enterprise version of Tetragon.

Tetragon provides deep visibility without requiring you to change your applications and workloads, thanks to its smart in-kernel filtering and aggregation logic, built directly into the eBPF-based, kernel-level collector.

Tetragon detects and is able to react to security-significant events, such as:

  • Process execution events
  • System call activity
  • I/O activity including network & file access

When installed into a Kubernetes environment, Tetragon is Kubernetes-aware. This means that Tetragon can match the Kubernetes metadata, such as namespaces, pods, labels and beyond, to the security and process event information collected against workloads running in the cluster.


You can read a deep dive about Tetragon in our earlier blog post, or dive into the official Tetragon docs.

Getting Started and Installing Tetragon

For this walkthrough, we are going to follow the Tetragon Kubernetes getting-started guide, and then look at some more advanced use cases. I have also recorded the video below, which you can follow along with as well as this blog post.

My environment is a kind cluster set up with the configuration below, with Calico Open Source installed without any modifications, which means you will be able to follow along with this tutorial from your own local machine.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # nodepinger
  - containerPort: 32042
    hostPort: 32042
  # goldpinger
  - containerPort: 32043
    hostPort: 32043
- role: worker
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  podSubnet: 192.168.0.0/16

We can validate the configuration below:

root@server:~# kubectl get nodes
NAME                 STATUS   ROLES           AGE     VERSION
kind-control-plane   Ready    control-plane   5m24s   v1.26.3
kind-worker          Ready    <none>          5m5s    v1.26.3
kind-worker2         Ready    <none>          5m5s    v1.26.3
kind-worker3         Ready    <none>          5m5s    v1.26.3
kind-worker4         Ready    <none>          5m5s    v1.26.3

root@server:~# kubectl -n calico-system rollout status ds/calico-node
daemon set "calico-node" successfully rolled out

We will install Tetragon using the available Helm chart. The commands below configure the Helm chart repository, install Tetragon, and wait for it to become ready:

$ helm repo add cilium https://helm.cilium.io --force-update
$ helm install tetragon cilium/tetragon -n kube-system
$ kubectl rollout status -n kube-system ds/tetragon -w

This will produce an output like the following example:

"cilium" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cilium" chart repository
Update Complete. ⎈Happy Helming!⎈
NAME: tetragon
LAST DEPLOYED: Thu Aug 17 11:41:01 2023
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Waiting for daemon set "tetragon" rollout to finish: 0 of 5 updated pods are available...
Waiting for daemon set "tetragon" rollout to finish: 1 of 5 updated pods are available...
Waiting for daemon set "tetragon" rollout to finish: 2 of 5 updated pods are available...
Waiting for daemon set "tetragon" rollout to finish: 3 of 5 updated pods are available...
Waiting for daemon set "tetragon" rollout to finish: 4 of 5 updated pods are available...
daemon set "tetragon" successfully rolled out

To generate some security observability events in our environment, I am going to deploy the following Cilium demo application. If you have used any of the Isovalent hands-on labs to explore Cilium and Hubble in the past, you may be familiar with it:

$ kubectl create namespace demo-app
$ kubectl create --namespace demo-app -f https://raw.githubusercontent.com/cilium/cilium/1.14.1/examples/minikube/http-sw-app.yaml

We can validate that all components of the demo application are configured, and pods are running with the following command:

$ kubectl get all --namespace demo-app

NAME                             READY   STATUS    RESTARTS   AGE
pod/deathstar-8464cdd4d9-f42kf   1/1     Running   0          28s
pod/deathstar-8464cdd4d9-qmrpj   1/1     Running   0          28s
pod/tiefighter                   1/1     Running   0          28s
pod/xwing                        1/1     Running   0          28s

NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/deathstar   ClusterIP   10.96.233.109   <none>        80/TCP    28s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deathstar   2/2     2            2           28s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/deathstar-8464cdd4d9   2         2         2       28s

Now, to install the tetra CLI, head over to the official documentation, as it lists the installation steps for each platform.

Below are the steps for installing on the Linux amd64 architecture.

$ curl -L https://github.com/cilium/tetragon/releases/latest/download/tetra-linux-amd64.tar.gz | tar -xz
$ sudo mv tetra /usr/local/bin

Viewing process events from our workloads

Once we have the CLI installed, let’s generate some events in the xwing pod. Using the command below, we exec into the pod and run a few commands:

$ kubectl exec -n demo-app -it xwing -- sh
bash-4.3# whoami
root
bash-4.3# ls
bin                   etc                   lib                   media                 netperf-2.7.0         proc                  product_uuid          run                   sys                   usr
dev                   home                  linuxrc               mnt                   netperf-2.7.0.tar.gz  product_name          root                  sbin                  tmp                   var
bash-4.3# cat
^C
bash-4.3# cat product_uuid
c780787b-e0ac-45d0-bcc6-4f269606b8b9
bash-4.3# 

We can view the process events by running the following command. I recommend doing this in a separate terminal window, so that you can leave the command running to capture the output:

$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespaces demo-app --pods xwing
🚀 process demo-app/xwing /usr/bin/whoami                                 
💥 exit    demo-app/xwing /usr/bin/whoami  0                     
🚀 process demo-app/xwing /bin/ls                                         
💥 exit    demo-app/xwing /bin/ls  0                             
🚀 process demo-app/xwing /bin/cat                                        
💥 exit    demo-app/xwing /bin/cat  SIGINT                       
🚀 process demo-app/xwing /bin/cat product_uuid                           
💥 exit    demo-app/xwing /bin/cat product_uuid 0

This command reads the events recorded as logs by the “export-stdout” container, which runs as part of the Tetragon services in your cluster. There are other ways to export and access the data collected by Tetragon, such as connecting to the Tetragon containers, where the tetra CLI is available, or forwarding the logs to a SIEM solution.

By piping this output to the tetra CLI, we can filter the logs by Kubernetes metadata; in this example, namespace and pod.

For the full details, we can parse the JSON log output using the jq command, as in the example below, which selects events for the pod “xwing”. You can find more information about process exec events in the official documentation.

$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | jq 'select(.process_exec.process.pod.name=="xwing" or .process_exit.process.pod.name=="xwing")'
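
If you prefer scripting to jq, the same selection can be sketched in Python. The event shape here (a process_exec or process_exit object carrying process.pod.name) follows the structure shown in the Tetragon docs, but the sample log lines below are synthetic, not real Tetragon output:

```python
import json

def pod_name(event):
    """Return the pod name attached to an exec/exit event, if any."""
    for key in ("process_exec", "process_exit"):
        pod = event.get(key, {}).get("process", {}).get("pod", {})
        if "name" in pod:
            return pod["name"]
    return None

def filter_events(lines, pod="xwing"):
    """Yield parsed events whose pod name matches."""
    for line in lines:
        event = json.loads(line)
        if pod_name(event) == pod:
            yield event

# Two synthetic log lines standing in for tetragon.log output:
log = [
    '{"process_exec": {"process": {"binary": "/bin/cat", "pod": {"name": "xwing"}}}}',
    '{"process_exit": {"process": {"binary": "/bin/ls", "pod": {"name": "tiefighter"}}}}',
]
matches = list(filter_events(log))
print(len(matches))  # 1: only the xwing event matches
```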

A Short Introduction to Tracing Policies

Tetragon implements a user-configurable resource called a Tracing Policy. This resource allows users to trace events in the kernel and optionally define actions to take on matches. Policies are made up of two components: a hook point and a selector.

  • A hook point is the location within the kernel from which Tetragon traces the event. Tetragon currently supports kprobes, tracepoints, and uprobes. A hook point has arguments (specifying which arguments of the traced kernel function to collect in the trace output) and a return value (the type of the return argument to capture in the trace output).
  • Selectors allow in-kernel BPF filtering and the ability to take actions on matching events. Actions range from logging the event, to overriding a return value of a system call, to more complex examples such as socket tracking.

In this tutorial, the example Tracing Policies use kprobes as the hook point. Kprobes allow Tetragon to break into a kernel routine and collect debugging and performance information non-disruptively; they are tied to the kernel version in use. For more information regarding the use of kprobes, please visit the official documentation.
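
Putting the two components together, a minimal policy skeleton looks like the following. This is a sketch assuming the cilium.io/v1alpha1 schema used by the examples later in this post; the hook point, argument, and action shown are illustrative:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: example-skeleton
spec:
  kprobes:
  - call: tcp_connect      # hook point: the kernel function to trace
    syscall: false
    return: false
    args:
    - index: 0
      type: sock           # argument of the traced function to collect
    selectors:             # optional in-kernel filtering and actions
    - matchActions:
      - action: Post       # emit the matching event to userspace
```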

Network Observability without Cilium

One area where you might expect Cilium to be required in your Kubernetes cluster is network observability with Tetragon. However, Cilium is not needed for this use case either.

First, let’s apply a tracing policy that captures TCP events from our workloads.

$ kubectl apply -f https://raw.githubusercontent.com/cilium/tetragon/main/examples/tracingpolicy/tcp-connect.yaml

Below is the YAML file, which uses kprobes to monitor tcp_* kernel functions:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: connect
spec:
  kprobes:
  - args:
    - index: 0
      maxData: false
      returnCopy: false
      type: sock
    call: tcp_connect
    return: false
    syscall: false
  - args:
    - index: 0
      maxData: false
      returnCopy: false
      type: sock
    call: tcp_close
    return: false
    syscall: false
  - args:
    - index: 0
      maxData: false
      returnCopy: false
      type: sock
    - index: 2
      maxData: false
      returnCopy: false
      type: int
    call: tcp_sendmsg
    return: false
    syscall: false

Now let’s generate some traffic by connecting to the Isovalent website from the xwing pod.

$ kubectl exec -n demo-app -it xwing -- curl http://isovalent.com

Going back to our second terminal window, let’s look at the filtered Tetragon logs for the xwing pod:

$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespaces demo-app --pods xwing
🚀 process demo-app/xwing /usr/bin/curl http://isovalent.com              
🔌 connect demo-app/xwing /usr/bin/curl tcp 192.168.110.132:49560 -> 199.232.194.22:80 
📤 sendmsg demo-app/xwing /usr/bin/curl tcp 192.168.110.132:49560 -> 199.232.194.22:80 bytes 77 
🧹 close   demo-app/xwing /usr/bin/curl tcp 192.168.110.132:49560 -> 199.232.194.22:80 
💥 exit    demo-app/xwing /usr/bin/curl http://isovalent.com 0   

One feature of Tetragon Enterprise Edition is the ability to resolve those returned IP addresses to FQDNs via DNS lookup. If you want to dive further into the enterprise features, links to our free hands-on labs covering more advanced use cases are provided at the end of this walkthrough.

Check process capabilities and kernel namespaces access

Tetragon provides the ability to check process capabilities and kernel namespace access. This information helps security teams detect breaches where a process or Kubernetes pod has gained access to privileges or host namespaces it should not have.

To demo this capability easily, you can follow these steps; however, we also have a dedicated, free hands-on lab that covers this subject in further detail, linked at the end of this section.

First, we need to enable visibility into capability and namespace changes by updating the Tetragon ConfigMap. This is easily achieved with helm upgrade --set arguments.

$ helm upgrade -n kube-system --set tetragon.enableProcessCred="true" --set tetragon.enableProcessNs="true" tetragon cilium/tetragon 
$ kubectl rollout restart daemonset tetragon -n kube-system
$ kubectl rollout status daemonset tetragon -n kube-system

To capture the additional data we are now collecting, and to show the impact of containers with different privileges, let’s deploy a new pod into our demo-app namespace with its security context set to privileged.

Below is the manifest we will apply to our environment; the command to create the pod follows:

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: test-pod
    image: docker.io/cilium/starwars:latest
    command: [ "sleep", "365d" ]
    securityContext:
      privileged: true

First, let’s change the tetra command we are using to focus on the new pod name; then, in a separate terminal window, apply the manifest to the cluster to create the pod.

$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespaces demo-app --pods test-pod

To create the pod:

$ kubectl apply -n demo-app -f https://raw.githubusercontent.com/cilium/tetragon/main/testdata/specs/testpod.yaml

Now let’s observe the process events captured for this pod, which we can see has the “CAP_SYS_ADMIN” capability. You can read more details about Linux capabilities here.

🚀 process demo-app/test-pod /usr/local/bin/mount-product-files /usr/local/bin/mount-product-files 🛑 CAP_SYS_ADMIN
🚀 process demo-app/test-pod /usr/bin/jq -r .bundle        🛑 CAP_SYS_ADMIN
💥 exit    demo-app/test-pod /usr/bin/jq -r .bundle 0 🛑 CAP_SYS_ADMIN
.....
💥 exit    demo-app/test-pod /usr/local/bin/mount-product-files /usr/local/bin/mount-product-files 0 🛑 CAP_SYS_ADMIN
🚀 process demo-app/test-pod /bin/sleep 365d               🛑 CAP_SYS_ADMIN
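
The CAP_SYS_ADMIN flag shown in these events corresponds to bit 21 of the Linux capability bitmask (the CapEff field in /proc/<pid>/status). A small sketch of decoding such a bitmask, with a handful of bit numbers taken from linux/capability.h (only an illustrative subset is listed):

```python
# A few capability bit numbers from linux/capability.h.
CAPS = {
    0: "CAP_CHOWN",
    1: "CAP_DAC_OVERRIDE",
    12: "CAP_NET_ADMIN",
    19: "CAP_SYS_PTRACE",
    21: "CAP_SYS_ADMIN",
}

def decode_caps(capeff):
    """Return the names of known capabilities set in a CapEff bitmask."""
    return [name for bit, name in sorted(CAPS.items()) if capeff & (1 << bit)]

# A fully privileged container typically shows CapEff: 000001ffffffffff,
# i.e. all capability bits set.
print("CAP_SYS_ADMIN" in decode_caps(0x000001FFFFFFFFFF))  # True
print(decode_caps(1 << 21))  # ['CAP_SYS_ADMIN']
```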

This was obviously a very quick overview of this capability; head over to the “Getting Started with Tetragon” lab, which takes you through an end-to-end container escape attack and how to detect it using Tetragon OSS.

Getting Started with Tetragon

Perform a real-world Kubernetes container escape and detect it with Tetragon!

Start Lab

Kubernetes namespace and pod label filtering

The examples so far have used TracingPolicies, which are applied at the cluster level. However, in Kubernetes, you may need to apply these policies to a smaller scope. Tetragon now supports filtering based on Kubernetes namespaces and pod labels; this feature is currently in beta.

First, to enable this beta feature, we need to change our Tetragon configuration using Helm.

$ helm upgrade -n kube-system --set tetragon.enablePolicyFilter="true" tetragon cilium/tetragon 
$ kubectl rollout restart daemonset tetragon -n kube-system
$ kubectl rollout status daemonset tetragon -n kube-system

We can confirm the changes made by the helm upgrade commands in this tutorial by viewing the tetragon-config ConfigMap:

$ kubectl get configmap tetragon-config  -n kube-system -o yaml
apiVersion: v1
data:
  enable-k8s-api: "true"
  enable-policy-filter: "true"
  enable-process-cred: "false"
  enable-process-ns: "false"
  export-allowlist: '{"event_set":["PROCESS_EXEC", "PROCESS_EXIT", "PROCESS_KPROBE",
    "PROCESS_UPROBE"]}'
  export-denylist: |-
    {"health_check":true}
    {"namespace":["", "cilium", "kube-system"]}
  export-file-compress: "false"
  export-file-max-backups: "5"
  export-file-max-size-mb: "10"
  export-filename: /var/run/cilium/tetragon/tetragon.log
  export-rate-limit: "-1"
  field-filters: '{}'
  gops-address: localhost:8118
  metrics-server: :2112
  process-cache-size: "65536"
  procfs: /procRoot
  server-address: localhost:54321
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: tetragon
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2023-08-31T15:29:44Z"
  labels:
    app.kubernetes.io/instance: tetragon
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: tetragon
    helm.sh/chart: tetragon-0.10.0
  name: tetragon-config
  namespace: kube-system
  resourceVersion: "5982"
  uid: 8292997e-17c7-4073-86a1-1076447e5c59

Applying TracingPolicies to a Kubernetes Namespace

For this example, we are going to use one of the publicly available examples, which implements file-level monitoring, as a template to create a new policy tracking read/write activity within our containers. I cannot think of a better example than monitoring the /etc/passwd and /etc/shadow files.

Below is my TracingPolicyNamespaced, which modifies the linked example to:

  • Change the kind to TracingPolicyNamespaced
  • Add /etc/shadow to the list of files to be monitored

I have saved this file locally as file-monitoring-etc-files.yaml.

apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: "file-monitoring-filtered"
spec:
  kprobes:
  - call: "security_file_permission"
    syscall: false
    return: true
    args:
    - index: 0
      type: "file" # (struct file *) used for getting the path
    - index: 1
      type: "int" # 0x04 is MAY_READ, 0x02 is MAY_WRITE
    returnArg:
      index: 0
      type: "int"
    returnArgAction: "Post"
    selectors:
    - matchArgs:      
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd" # the files we care about
        - "/etc/shadow"
      - index: 1
        operator: "Equal"
        values:
        - "2" # MAY_WRITE
  - call: "security_mmap_file"
    syscall: false
    return: true
    args:
    - index: 0
      type: "file" # (struct file *) used for getting the path
    - index: 1
      type: "uint32" # the prot flags PROT_READ(0x01), PROT_WRITE(0x02), PROT_EXEC(0x04)
    - index: 2
      type: "nop" # the mmap flags (i.e. MAP_SHARED, ...)
    returnArg:
      index: 0
      type: "int"
    returnArgAction: "Post"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd" # the files we care about
        - "/etc/shadow"
      - index: 1
        operator: "Mask"
        values:
        - "2" # PROT_WRITE
  - call: "security_path_truncate"
    syscall: false
    return: true
    args:
    - index: 0
      type: "path" # (struct path *) used for getting the path
    returnArg:
      index: 0
      type: "int"
    returnArgAction: "Post"
    selectors:
    - matchArgs:
      - index: 0
        operator: "Equal"
        values:
        - "/etc/passwd" # the files we care about
        - "/etc/shadow"
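
The selectors above use two different operators: Equal requires an exact match on the collected argument, while Mask (in my reading of the docs) matches when the argument ANDed with the value is non-zero, so an mmap with combined flags such as PROT_READ|PROT_WRITE still matches. A quick sketch of the two checks, using the flag values from the YAML comments:

```python
# Flag values from the YAML comments above.
MAY_WRITE = 0x02   # second argument of security_file_permission
PROT_WRITE = 0x02  # prot flag passed to security_mmap_file

def op_equal(arg, value):
    """'Equal' operator: the collected argument must match exactly."""
    return arg == value

def op_mask(arg, value):
    """'Mask' operator: match when the argument ANDed with the value is non-zero."""
    return (arg & value) != 0

print(op_equal(0x02, MAY_WRITE))         # True: a pure write permission check
print(op_mask(0x01 | 0x02, PROT_WRITE))  # True: PROT_READ|PROT_WRITE mmap
print(op_mask(0x01, PROT_WRITE))         # False: a read-only mmap is ignored
```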

Now we will apply this YAML file to our cluster, scoped to our demo-app namespace.

$ kubectl apply -n demo-app -f file-monitoring-etc-files.yaml

Now, using kubectl exec against the “xwing” and “test-pod” (the privileged pod we deployed earlier) containers, we run a few commands that view and alter the files using different methods, and see that each method is recorded.

# Execute commands in the test-pod
$ kubectl exec -n demo-app -it test-pod  -- sh
/bin/echo "cilium" >> /etc/passwd
exit

# Execute commands in the xwing pod
$ kubectl exec -n demo-app -it xwing -- sh
/usr/bin/printf "ebpf" >> /etc/passwd
exit

Now let’s check our events. This time, the tetra command captures all events in the demo-app namespace; the --pods argument used in the earlier examples has been removed.

$ kubectl logs -n kube-system -l app.kubernetes.io/name=tetragon -c export-stdout -f | tetra getevents -o compact --namespaces demo-app
🚀 process demo-app/test-pod /bin/sh                                      
🚀 process demo-app/test-pod /bin/echo cilium                             
📝 write   demo-app/test-pod /bin/echo /etc/passwd                        
📝 write   demo-app/test-pod /bin/echo /etc/passwd                        
💥 exit    demo-app/test-pod /bin/echo cilium 0                  
💥 exit    demo-app/test-pod /bin/sh  0                          
🚀 process demo-app/xwing /bin/sh                                         
🚀 process demo-app/xwing /usr/bin/printf ebpf                            
📝 write   demo-app/xwing /usr/bin/printf /etc/passwd                     
📝 write   demo-app/xwing /usr/bin/printf /etc/passwd                     
💥 exit    demo-app/xwing /usr/bin/printf ebpf 0                 
💥 exit    demo-app/xwing /bin/sh  0  

Applying TracingPolicies to specific pods based on label selectors

The last example applies a TracingPolicy based on Kubernetes labels. For this, we will create a policy that blocks writes to files inside the /tmp/ directory for any pod with the label app=test.

First let’s create the file deny-write-tmp.yaml with the following content:

apiVersion: cilium.io/v1alpha1
kind: TracingPolicyNamespaced
metadata:
  name: "deny-write-tmp"
spec:
  podSelector:
    matchLabels:
      app: "test"
  kprobes:
  - call: "fd_install"
    syscall: false
    return: false
    args:
    - index: 0
      type: int
    - index: 1
      type: "file"
    selectors:
    - matchPIDs:
      - operator: NotIn
        followForks: true
        isNamespacePID: true 
        values:
        - 1
      matchArgs:
      - index: 1
        operator: "Prefix"
        values:
        - "/tmp/"
      matchActions:
      - action: FollowFD
        argFd: 0
        argName: 1
  - call: "__x64_sys_close"
    syscall: true
    args:
    - index: 0
      type: "int"
    selectors:
    - matchActions:
      - action: UnfollowFD
        argFd: 0
        argName: 0
  - call: "__x64_sys_read"
    syscall: true
    args:
    - index: 0
      type: "fd"
    - index: 1
      type: "char_buf"
      returnCopy: true
    - index: 2
      type: "size_t"
  - call: "__x64_sys_write"
    syscall: true
    args:
    - index: 0
      type: "fd"
    - index: 1
      type: "char_buf"
      sizeArgIndex: 3
    - index: 2
      type: "size_t"
    selectors: 
    - matchActions:
        - action: Sigkill

When applying this to the demo-app namespace, note that the kind field in the YAML above is set to TracingPolicyNamespaced because the policy is scoped to that Kubernetes namespace.

$ kubectl apply -n demo-app -f deny-write-tmp.yaml

Now, let’s apply the chosen label to the privileged pod I deployed earlier.

$ kubectl label -n demo-app pod/test-pod app=test

To test the policy, we expect that if we try to write changes to a file located in the /tmp/ directory, the process is killed.

$ kubectl exec -n demo-app -it test-pod  -- sh

# Let's create a file in the /tmp/ directory
touch /tmp/newfile.txt

# Now edit and save the file using vi
vi /tmp/newfile.txt

# You will see the following when you try to save
:wq!Killed

It’s a little hard to capture the expected output in written form, so I’ve also captured it in the video, which shows the expected log output at the end.

Below are the Tetra events.

🚀 process demo-app/test-pod /bin/sh                                      
🚀 process demo-app/test-pod /bin/touch /tmp/newfile.txt                  
💥 exit    demo-app/test-pod /bin/touch /tmp/newfile.txt 0       
🚀 process demo-app/test-pod /usr/bin/vi /tmp/newfile.txt                 
📬 open    demo-app/test-pod /usr/bin/vi /tmp/newfile.txt                 
📪 close   demo-app/test-pod /usr/bin/vi                                           
📬 open    demo-app/test-pod /usr/bin/vi /tmp/newfile.txt                 
📝 write   demo-app/test-pod /usr/bin/vi /tmp/newfile.txt 5 bytes 
💥 exit    demo-app/test-pod /usr/bin/vi /tmp/newfile.txt SIGKILL 

Where can I learn more?

You might have noticed that there wasn’t any mention of the underlying CNI throughout this tutorial, and that’s exactly why I created this blog post! Tetragon doesn’t rely on the underlying CNI at all to bring these powerful eBPF-based observability and enforcement features to your platform.

Earlier in this post, I linked to our free hands-on lab, which takes you through the open-source Tetragon offering in further detail. Why not also jump into the following labs to get hands-on with the extended enterprise version of Tetragon: “Isovalent Enterprise for Cilium: Security Visibility”, and “Isovalent Enterprise for Cilium: TLS Visibility”, which shows how to gain insights into TLS. You can also follow up for a live demo from our team here.

Isovalent Enterprise for Cilium: Security Visibility

Identify Late Process Execution, trace lateral movement and data exfiltration with Tetragon Enterprise and Hubble Enterprise.

Start Lab

Isovalent Enterprise for Cilium: TLS Visibility

Mechanisms like TLS ensure that data is encrypted in transit but verifying that a TLS configuration is secure is a challenge. In this lab use Tetragon Enterprise to gain visibility into TLS.

Start Lab

And for those of you who prefer video content, I have you covered there as well: a quick start to Tetragon from our security product manager, Natália Ivánkó, and how to use Tetragon to discover container escapes.

Author: Dean Lewis, Senior Technical Marketing Engineer
