
Enabling Multicast Securely With IPsec in the Cloud Native Landscape With Cilium

Amit Gupta

IP multicast is a bandwidth-conserving technology that reduces traffic by simultaneously delivering a single stream of information to potentially thousands of corporate recipients and homes. Multicast has long found its use case in traditional networks, from power systems, emergency response, and audio dispatch systems to newer uses in the finance, media, and broadcasting worlds. Despite the evident efficiency benefits provided by multicast, the technology was never natively supported by cloud providers. Cloud migration projects often stall when this roadblock is discovered, and teams spend time and money building expensive workarounds. Many developers assume multicast in the cloud is simply not available. This blog post will walk you through enabling Multicast in the cloud with Isovalent Enterprise for Cilium and enabling traffic encryption between pods across nodes with IPsec.

What is Multicast?

Traditional IP communication allows a host to send packets to a single host (unicast transmission) or to all hosts (broadcast transmission). IP multicast provides a third possibility: sending a message from a single source to a selected set of destinations across a Layer 3 network in one data stream. You can read more about the evolution of multicast in a featured blog by my colleague Nico Vibert.

Multicast relies on a standard called IGMP (Internet Group Management Protocol), which allows many applications, or multicast groups, to share a single IP address and receive the same data simultaneously. (For instance, IGMP is used in gaming when multiple gamers use the network simultaneously to play together.)  

How does multicast work?

To join a multicast group, an application subscribes to the data by sending an IGMP join. Whenever a packet destined for that multicast group arrives, the network replicates it and delivers a copy to every member of the group, say, N members or endpoints. If an endpoint is no longer interested in those packets, it leaves the group using an IGMP leave message.

What IP address schema does multicast use?

IGMP uses IP addresses set aside for multicasting. Multicast IP addresses are in the range between 224.0.0.0 and 239.255.255.255. Each multicast group shares one of these IP addresses. When the network receives packets directed at the shared IP address, it duplicates them and delivers copies to all multicast group members.

IGMP multicast groups can change at any time. A device can send an IGMP “join group” or “leave group” message anytime.
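
To make the join and leave mechanics concrete, here is a minimal sketch you can run on any Linux host with socat and iproute2 installed; the group address 239.1.1.1, port 5000, and interface eth0 are arbitrary values chosen for illustration.

# Join group 239.1.1.1 and listen on UDP port 5000; socat sends an IGMP join on start
# and an IGMP leave when it exits.
socat UDP4-RECVFROM:5000,ip-add-membership=239.1.1.1:0.0.0.0,fork - &

# While the listener runs, the kernel reports the group membership on the interface.
ip maddr show dev eth0 | grep 239.1.1.1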

The need for Multicast: Can it aid financial services?

Financial services have multiple dependencies, such as trading, pricing, and exchange services, with many endpoints that need to receive the same data simultaneously, and this is where multicast helps. These endpoints can be producers of data (pushing multicast data out to other consumers) or consumers of the data (receiving data from other producers). This adds complexity in a cloud native landscape, which we will untangle in this blog in a simple and intuitive way.

What is Isovalent Enterprise for Cilium?

“Cilium is the next generation for container networking.”

Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium?

For enterprise customers requiring support and usage of Advanced Networking, Security, and Observability features, “Isovalent Enterprise for Cilium” is recommended with the following benefits:

  • Advanced network policy: Isovalent Cilium Enterprise provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security.
  • Hubble flow observability + User Interface: Isovalent Cilium Enterprise Hubble observability feature provides real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: Isovalent Cilium Enterprise provides seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making deploying and managing complex microservices architectures easy.
  • Enterprise-grade support: Isovalent Cilium Enterprise includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.

Why Multicast on Isovalent Enterprise for Cilium?

  • Kubernetes CRDs are a powerful extension to the Kubernetes API that expands Kubernetes beyond its core resource types. CRDs allow administrators to introduce new, application-specific resources into Kubernetes clusters, tailoring the platform to unique requirements. Multicast is enabled in Isovalent Enterprise for Cilium using CRDs.
  • The Multicast datapath is fully available for both Cilium OSS and Isovalent Enterprise for Cilium users, with the enterprise offering a CRD-based control plane to configure and manage multicast groups easily.

Good to know before you start

  • The maximum number of multicast groups supported is 1024, and the maximum number of subscribers per group on each node is 1024.
  • Cilium supports both IGMPv2 and IGMPv3 join and leave messages.
  • Current limitations with Multicast running on a Kubernetes cluster with Cilium:
    • This feature works in tunnel routing mode and is currently restricted to the VxLAN tunnel (a quick way to verify this is shown after this list).
    • Only IPv4 multicast is supported.
    • Only IPsec transparent encryption is supported.
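
Since multicast currently requires tunnel routing over VxLAN, a quick way to confirm your cluster is configured accordingly is to inspect the cilium-config ConfigMap. The key names below (routing-mode, tunnel-protocol) are the ones used by Cilium 1.15 and may differ in other versions.

# Expect routing-mode: tunnel and tunnel-protocol: vxlan
kubectl -n kube-system get configmap cilium-config -o yaml | grep -E 'routing-mode|tunnel-protocol'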

Pre-Requisites

The following prerequisites must be considered before you proceed with this tutorial.

  • An up-and-running Kubernetes cluster. If you don’t have one, you can create a cluster using one of these options:
  • The following dependencies should be installed:
  • Ensure that the kernel version on the Kubernetes nodes is >= 5.13 (see the check below).
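
You can confirm the kernel version on every node directly from the Kubernetes API, since each node reports it in its status:

kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion

In the AKS cluster used in this tutorial, the nodes report kernel 5.15.0, which satisfies this requirement.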

How can you achieve Multicast functionality with Cilium?

Create Kubernetes Cluster(s)

You can enable multicast on any Kubernetes distribution that fits your use case. These clusters can be created from your local machine or from a VM in the corresponding resource group/VPC/VNet of the respective cloud provider. In this tutorial, we will be enabling Multicast on the following distribution:

Create an AKS cluster with BYOCNI.

Set the subscription

Choose the subscription you want to use if you have multiple Azure subscriptions.

  • Replace SubscriptionName with your subscription name.
  • You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName

AKS Resource Group Creation

Create a Resource Group

clusterName="mcastdemo"
resourceGroup="mcastdemo"
location="mcastdemo"

az group create --name $resourceGroup --location $location

AKS Cluster creation

Pass the --network-plugin parameter with the value none.

az aks create -l $location -g $resourceGroup -n $clusterName --network-plugin none --kubernetes-version 1.29 --node-count 3

Set the Kubernetes Context

Log in to the Azure portal, browse to Kubernetes Services, select the respective Kubernetes service created (the AKS cluster), and click Connect. This will help you connect to your AKS cluster and set the respective Kubernetes context.

az aks get-credentials --resource-group $resourceGroup --name $clusterName

Cluster status check

Check the status of the nodes and make sure they are in a “Ready” state.

kubectl get nodes -o wide

NAME                                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-18707500-vmss000000   Ready    <none>   8d    v1.29.4   10.224.0.5    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1
aks-nodepool1-18707500-vmss000001   Ready    <none>   8d    v1.29.4   10.224.0.4    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1
aks-nodepool1-18707500-vmss000002   Ready    <none>   8d    v1.29.4   10.224.0.6    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1

Install Isovalent Enterprise for Cilium

Validate Cilium version

Check the version and status of Cilium with cilium status:

kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.4) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Secrets", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    False   [eth0   10.224.0.5 fe80::20d:3aff:fecc:462c (Direct Routing)]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
Cilium:                  Ok   1.15.4-cee.1 (v1.15.4-cee.1-2757ca6d)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 8/254 allocated from 10.1.0.0/24,
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       47/47 healthy
Proxy Status:            OK, ip 10.1.0.7, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 24.57   Metrics: Disabled
Encryption:              Disabled
Cluster health:          3/3 reachable   (2024-06-13T13:47:39Z)
Modules Health:          Stopped(0) Degraded(0) OK(12) Unknown(6)

Cilium Health Check

cilium-health is a tool available in Cilium that provides visibility into the overall health of the cluster’s networking connectivity. You can check node-to-node health with cilium-health status:

kubectl -n kube-system exec ds/cilium -- cilium-health status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2024-06-13T13:49:39Z
Nodes:
  aks-nodepool1-18707500-vmss000000 (localhost):
    Host connectivity to 10.224.0.5:
      ICMP to stack:   OK, RTT=320.405µs
      HTTP to agent:   OK, RTT=294.105µs
    Endpoint connectivity to 10.1.0.70:
      ICMP to stack:   OK, RTT=362.906µs
      HTTP to agent:   OK, RTT=315.705µs
  aks-nodepool1-18707500-vmss000001:
    Host connectivity to 10.224.0.4:
      ICMP to stack:   OK, RTT=1.157719ms
      HTTP to agent:   OK, RTT=631.911µs
    Endpoint connectivity to 10.1.1.82:
      ICMP to stack:   OK, RTT=1.144019ms
      HTTP to agent:   OK, RTT=511.709µs
  aks-nodepool1-18707500-vmss000002:
    Host connectivity to 10.224.0.6:
      ICMP to stack:   OK, RTT=1.174819ms
      HTTP to agent:   OK, RTT=1.324522ms
    Endpoint connectivity to 10.1.2.102:
      ICMP to stack:   OK, RTT=1.150019ms
      HTTP to agent:   OK, RTT=2.47584ms

How can we test Multicast with Cilium?

We will go over creating some basic application(s) and manifests.

Configuration

  • Before pods can join multicast groups, IsovalentMulticastGroup Resources must be configured to define which groups are enabled in the cluster.
  • For this tutorial, groups 225.0.0.11, 225.0.0.12, and 225.0.0.13 are enabled in the cluster. Pods can then start joining and sending traffic to these groups.
apiVersion: isovalent.com/v1alpha1
kind: IsovalentMulticastGroup
metadata:
  name: multicast-groups
spec:
  groupAddrs:
    - "225.0.0.11"
    - "225.0.0.12"
    - "225.0.0.13"
  • Apply the Custom Resource.
kubectl create -f isovalent-multicast-group.yaml
  • Check that the multicast CRDs are present in the cluster.
kubectl get crd -A | grep multicast

isovalentmulticastgroups.isovalent.com           2024-06-05T14:13:29Z
isovalentmulticastnodes.isovalent.com            2024-06-05T14:13:28Z

Create sample deployment(s)

  • In this tutorial, we will create four pods on a three-node Kubernetes cluster, which will participate in joining the multicast group and sending multicast packets. Pod deployment is done using the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netshoot
spec:
  replicas: 4
  selector:
    matchLabels:
      app: netshoot
  template:
    metadata:
      labels:
        app: netshoot
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: netshoot
        image: nicolaka/netshoot:latest
        imagePullPolicy: Always
        command: ["sleep", "infinite"]
  • Deploy the Deployment using:
kubectl create -f netshoot-ds.yaml
  • Verify the status of the nodes and the deployment.
kubectl get nodes -o wide

NAME                                STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-nodepool1-18707500-vmss000000   Ready    <none>   12d   v1.29.4   10.224.0.5    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1
aks-nodepool1-18707500-vmss000001   Ready    <none>   12d   v1.29.4   10.224.0.4    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1
aks-nodepool1-18707500-vmss000002   Ready    <none>   12d   v1.29.4   10.224.0.6    <none>        Ubuntu 22.04.4 LTS   5.15.0-1064-azure   containerd://1.7.15-1

kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName --all-namespaces | grep netshoot

netshoot-665b547d78-kgqnc                                Running   aks-nodepool1-18707500-vmss000000
netshoot-665b547d78-st4m6                                Running   aks-nodepool1-18707500-vmss000001
netshoot-665b547d78-wqhvh                                Running   aks-nodepool1-18707500-vmss000001
netshoot-665b547d78-zwhvh                                Running   aks-nodepool1-18707500-vmss000002

Multicast Group Validation

  • Validate the configured IsovalentMulticastGroup Resource and inspect its contents:
kubectl get isovalentmulticastgroups -o yaml

apiVersion: v1
items:
- apiVersion: isovalent.com/v1alpha1
  kind: IsovalentMulticastGroup
  metadata:
    creationTimestamp: "2024-06-05T14:15:32Z"
    generation: 1
    name: multicast-groups
    resourceVersion: "22719"
    uid: e4ff9973-69de-4820-aac2-b5f49e441dce
  spec:
    groupAddrs:
    - 225.0.0.11
    - 225.0.0.12
    - 225.0.0.13
kind: List
metadata:
  resourceVersion: ""
  • Each cilium agent pod contains the CLI cilium-dbg, which can be used to inspect multicast BPF maps. This command will list all the groups configured on the node.
kubectl -n kube-system exec -it ds/cilium -c cilium-agent -- cilium-dbg bpf multicast group list

Group Address
225.0.0.11
225.0.0.12
225.0.0.13

Add multicast subscribers

  • Pods that want to join multicast groups must send out IGMP join messages for specific group addresses. In this tutorial, pods netshoot-665b547d78-kgqnc, netshoot-665b547d78-wqhvh & netshoot-665b547d78-zwhvh join multicast group 225.0.0.11.
  • Using socat, which receives multicast packets addressed to 225.0.0.11 and forks a child process for each (a quick in-pod check of the resulting group membership follows these commands).
    • The child processes may each send one or more reply packets to the particular sender. Run this command on several hosts, and they will all respond in parallel.
kubectl exec -it netshoot-665b547d78-kgqnc  -- bash
netshoot-665b547d78-kgqnc:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -

kubectl exec -it netshoot-665b547d78-wqhvh -- bash
netshoot-665b547d78-wqhvh:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -

kubectl exec -it netshoot-665b547d78-zwhvh  -- bash
netshoot-665b547d78-zwhvh:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -
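
Before looking at the BPF maps, you can also confirm from inside a subscribed pod that the kernel has registered the group membership. This is an optional check; it assumes the pod interface is eth0 and relies on iproute2, which ships in the netshoot image.

kubectl exec netshoot-665b547d78-kgqnc -- ip maddr show dev eth0 | grep 225.0.0.11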

Validate multicast subscribers

  • You can validate that subscribers are tracked in BPF maps using the command cilium-dbg bpf multicast subscriber list all. Note that the subscriber list command displays information from the perspective of a given node.
kubectl -n kube-system exec -it ds/cilium -c cilium-agent -- cilium-dbg bpf multicast subscriber list all

Group           Subscriber      Type
225.0.0.11      10.1.0.247      Local Endpoint
                10.224.0.4      Remote Node
                10.224.0.6      Remote Node
225.0.0.12
225.0.0.13
  • In this output, one local pod subscribes to a multicast group, and two other nodes have pods joining the group 225.0.0.11.

Generating Multicast Traffic

  • Multicast Traffic can be generated from one of the Netshoot pods; all subscribers should receive it.
  • When Multicast Traffic is sent from netshoot-665b547d78-kgqnc Pod:
kubectl exec -it netshoot-665b547d78-kgqnc  -- bash
netshoot-665b547d78-kgqnc:~#  echo "hello, multicast!" | socat -u - UDP4-DATAGRAM:225.0.0.11:6666
  • Multicast Traffic is received by all three pods.
netshoot-665b547d78-st4m6:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!

netshoot-665b547d78-wqhvh:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!

netshoot-665b547d78-zwhvh:~#  socat UDP4-RECVFROM:6666,ip-add-membership=225.0.0.11:0.0.0.0,fork -
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!
hello, multicast!

How can I verify Multicast Traffic?

  • A common question is how to verify traffic for the protocol the testbed was set up for. In this case, for multicast, you can verify the IGMP join messages being sent from the netshoot-* pods (an IGMP-only capture filter is shown after the output below).
  • tcpdump is already included in the netshoot image; if your image does not ship it, install it with the image’s package manager.
kubectl exec -it netshoot-665b547d78-kgqnc  -- bash

netshoot-665b547d78-kgqnc:~# tcpdump -nni eth0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
07:13:05.805199 IP 10.1.0.247.48563 > 225.0.0.11.6666: UDP, length 18
15:25:16.282821 IP 10.1.2.73 > 224.0.0.22: igmp v3 report, 1 group record(s)
15:25:17.038823 IP 10.1.2.73 > 224.0.0.22: igmp v3 report, 1 group record(s)
07:13:09.544662 IP 10.1.0.247.60078 > 225.0.0.11.6666: UDP, length 18
07:13:10.436808 IP 10.1.0.247.41324 > 225.0.0.11.6666: UDP, length 18
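
The capture above mixes the UDP data packets with the IGMP membership reports. To look at the IGMP signalling only, you can filter on the protocol directly:

netshoot-665b547d78-kgqnc:~# tcpdump -nni eth0 igmp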

How can I observe Multicast Traffic?

In an enterprise environment, the key is to closely examine the intended traffic and act on any anomalies that are detected. To observe Multicast Traffic, you can install Isovalent Enterprise for Tetragon and look at the respective metrics exposed via a ServiceMonitor in Grafana.

What is Tetragon?

Tetragon provides powerful security observability and a real-time runtime enforcement platform. The creators of Cilium have built Tetragon and brought the full power of eBPF to the security world.

Tetragon helps platform and security teams solve the following:

Security Observability:

  • Observing application and system behavior such as process, syscall, file, and network activity
  • Tracing namespace, privilege, and capability escalations
  • File integrity monitoring

Runtime Enforcement:

  • Application of security policies to limit the privileges of applications and processes on a system (system calls, file access, network, kprobes)

How can I install Isovalent Enterprise for Tetragon?

To obtain the helm values to install Isovalent Enterprise for Tetragon and access to the Enterprise documentation, reach out to our sales team and support@isovalent.com.

Integrate Prometheus & Grafana with Tetragon

  • Install Prometheus and Grafana.
  • To access the Enterprise documentation for integrating Tetragon with Prometheus and Grafana, contact our sales team and support@isovalent.com.
  • Integrated multicast and UDP socket dashboards can be obtained by contacting our sales team and support@isovalent.com.
  • Apply the UDP & Interface parser tracing policies from the above links.
    • Enabling the UDP sensor provides eBPF observability for the following features:
      • UDP TX/RX Traffic
      • UDP TX/RX Segments
      • UDP TX/RX Bursts
      • UDP Latency Histogram
      • UDP Multicast TX/RX Traffic
      • UDP Multicast TX/RX Segments
      • UDP Multicast TX/RX Bursts
    • The UDP parser provides UDP state visibility into your Kubernetes cluster for your microservices.
    • We can observe the network metrics with Tetragon and trace the pod binary/process originating the traffic.
  • To access the Grafana dashboard, forward the traffic to your local machine or to another machine from which it is reachable.
    • Grafana can also be accessed via a service of the type LoadBalancer.
kubectl -n monitoring port-forward service/prometheus-grafana --address 0.0.0.0 --address :: 80:80
  • Log in to the Grafana dashboard with the requisite credentials, browse to dashboards, and click on the dashboard named “Tetragon/ UDP Throughput/ Socket.”
  • Visualize metrics for a specific multicast group.
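
If you prefer querying Prometheus directly instead of the dashboards, the multicast socket metrics shown later in this post (for example, tetragon_socket_stats_udp_mcast_rxbytes_total) can be queried over the Prometheus HTTP API. This is a minimal sketch; the service name prometheus-operated and port 9090 are typical for a kube-prometheus-stack installation and may differ in your environment.

kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090 &

# Received multicast bytes per pod for group 225.0.0.11 over the last 5 minutes.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum by (pod) (rate(tetragon_socket_stats_udp_mcast_rxbytes_total{srcmcast="225.0.0.11"}[5m]))'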

How do I check Tetragon values and metrics if I don’t have access to Grafana?

  • You can also look at the raw stats by doing a simple curl to the Tetragon metrics service and then export them to your chosen SIEM. As an example:
  • In the scenario above, the Tetragon service is running on the following:
kubectl get svc -n kube-system | grep tetragon
tetragon                                             ClusterIP      10.0.134.122   <none>         2112/TCP                       12d
  • Log in to one of the workloads from where the Tetragon service is accessible:
netshoot-665b547d78-zwhvh:~# curl 10.0.134.122:2112/metrics | grep -i tetragon_socket_stats_udp_mcast_

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0# HELP tetragon_socket_stats_udp_mcast_consume_misses_total UDP socket consume packet misses
# TYPE tetragon_socket_stats_udp_mcast_consume_misses_total counter
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_consume_misses_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 0
# HELP tetragon_socket_stats_udp_mcast_drops_total UDP socket drops statistics
# TYPE tetragon_socket_stats_udp_mcast_drops_total counter
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_drops_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 0
# HELP tetragon_socket_stats_udp_mcast_rxbytes_total UDP socket RX bytes statistics
# TYPE tetragon_socket_stats_udp_mcast_rxbytes_total counter
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 695450
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 695520
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 695555
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 126
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 126
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 126
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 252
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 234
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 695275
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 695135
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 695205
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 694680
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 694750
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 694645
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 101745
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 101535
tetragon_socket_stats_udp_mcast_rxbytes_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 0
# HELP tetragon_socket_stats_udp_mcast_rxsegs_total UDP socket RX segment statistics
# TYPE tetragon_socket_stats_udp_mcast_rxsegs_total counter
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 19870
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 19872
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 19873
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 7
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 7
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 7
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 14
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 13
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 19865
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 19861
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 19863
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 19848
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 19850
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 19847
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 2907
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 2901
tetragon_socket_stats_udp_mcast_rxsegs_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 0
# HELP tetragon_socket_stats_udp_mcast_txbytes_total UDP socket TX bytes statistics
# TYPE tetragon_socket_stats_udp_mcast_txbytes_total counter
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txbytes_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 719810
# HELP tetragon_socket_stats_udp_mcast_txsegs_total UDP socket TX segment statistics
# TYPE tetragon_socket_stats_udp_mcast_txsegs_total counter
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-rmc55",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-w5tgb",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.181",dstnamespace="default",dstpod="mcast-snd-2-779959d569-x9hxt",dstworkload="mcast-snd-2",namespace="default",pod="mcast-rcv-2-7978487546-whjsk",srcmcast="225.0.0.12",workload="mcast-rcv-2"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.0.247",dstnamespace="default",dstpod="netshoot-665b547d78-kgqnc",dstworkload="netshoot",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-dgdlz",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-fd95g",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.1.187",dstnamespace="default",dstpod="mcast-snd-3-6878bb7589-mm5p9",dstworkload="mcast-snd-3",namespace="default",pod="mcast-rcv-3-756cf6695f-xwb2d",srcmcast="225.0.0.13",workload="mcast-rcv-3"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-kk7v6",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-msrvj",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="mcast-rcv-1-7674cbc6df-vrc4z",srcmcast="225.0.0.11",workload="mcast-rcv-1"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-st4m6",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="10.1.2.220",dstnamespace="default",dstpod="mcast-snd-1-755b7c8597-gvlhm",dstworkload="mcast-snd-1",namespace="default",pod="netshoot-665b547d78-wqhvh",srcmcast="225.0.0.11",workload="netshoot"} 0
tetragon_socket_stats_udp_mcast_txsegs_total{binary="/usr/bin/socat",dstmcast="225.0.0.13",dstnamespace="",dstpod="",dstworkload="",namespace="default",pod="mcast-snd-3-6878bb7589-mm5p9",srcmcast="10.1.1.187",workload="mcast-snd-3"} 20566
100  301k    0  301k    0     0  15.7M      0 --:--:-- --:--:-- --:--:-- 16.3M

Can I encrypt Multicast Traffic?

While developing this feature, some customers asked us to encrypt Multicast Traffic. We are happy to announce that you can use IPsec transparent encryption to enable traffic encryption between pods across nodes.

Configuration

  • Sysctl: The following sysctl must be configured on the nodes to allow encrypted Multicast Traffic to be forwarded correctly (an example of applying it on a node follows this list):
net.ipv4.conf.all.accept_local=1
  • IPsec secret: A Kubernetes secret that consists of one key-value pair, where the key is the name of the file to be mounted as a volume in the cilium-agent pods. Configure the IPsec secret using the following command:
kubectl create -n kube-system secret generic cilium-ipsec-keys --from-literal=keys="3+ rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128"
  • The secret can be listed with kubectl -n kube-system get secrets and will show up as cilium-ipsec-keys.
kubectl -n kube-system get secrets cilium-ipsec-keys

NAME                TYPE     DATA   AGE
cilium-ipsec-keys   Opaque   1      12d
  • In addition to the IPsec configuration that encrypts unicast traffic between pods on different nodes, you can set the enable-ipsec-encrypted-overlay config option to true to also encrypt Multicast Traffic between pods on different nodes.
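
As a reference, here is a minimal way to apply and persist the required sysctl on an individual node (for example, over SSH or from a host-level debug shell); how you roll this out across the node pool depends on your environment.

# Apply the sysctl immediately ...
sudo sysctl -w net.ipv4.conf.all.accept_local=1

# ... and persist it across reboots.
echo 'net.ipv4.conf.all.accept_local=1' | sudo tee /etc/sysctl.d/99-cilium-multicast.conf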

How can I install/upgrade my cluster with Multicast and IPsec?

You can either upgrade an existing cluster running Multicast to also use IPsec or create a greenfield cluster with Multicast + IPsec.

To obtain the helm values to install IPsec with Multicast and access to the Enterprise documentation, reach out to our sales team and support@isovalent.com.

How can I validate if the cluster has been enabled for encryption?

You can ensure that IPsec has been enabled as the encryption type by running:

kubectl -n kube-system exec ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.29 (v1.29.4) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideEnvoyConfig", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumEnvoyConfig", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Secrets", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    False   [eth0   10.224.0.6 fe80::20d:3aff:fef2:9f79 (Direct Routing)]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            none
Cilium:                  Ok   1.15.4-cee.1 (v1.15.4-cee.1-2757ca6d)
NodeMonitor:             Listening for events on 2 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 19/254 allocated from 10.1.4.0/24,
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       101/101 healthy
Proxy Status:            OK, ip 10.1.4.51, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 8.85   Metrics: Disabled
Encryption:              IPsec
Cluster health:          3/3 reachable   (2024-06-18T11:23:55Z)
Modules Health:          Stopped(0) Degraded(0) OK(13) Unknown(5)
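
You can also inspect the IPsec state directly from the agent's debug CLI, which reports the encryption type and the keys in use; the exact output format may vary between Cilium versions.

kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg encrypt status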

Create sample application(s)

You can create a client and a server application that are placed on two distinct Kubernetes nodes (via topology spread constraints) to test node-to-node IPsec encryption, using the following manifests:

  • Apply the client-side manifest. The client does a “wget” towards the server every 2 seconds.
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    post: aks-ipsec-cilium
    name: client
spec:
  containers:
    - name: client
      image: busybox
      command: ["watch", "wget", "server"]
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        post: aks-ipsec-cilium
kubectl apply -f client.yaml
  • Apply the server-side manifest.
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    post: aks-ipsec-cilium
    name: server
spec:
  containers:
    - name: server
      image: nginx
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "kubernetes.io/hostname"
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        post: aks-ipsec-cilium
---
apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    name: server
  ports:
  - port: 80
kubectl apply -f server.yaml
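
Before capturing traffic, you can quickly confirm that the client reaches the nginx server through the server Service; this reuses the same wget the client pod runs in a loop.

kubectl exec client -- wget -qO- server | head -n 4

You should see the beginning of the default nginx welcome page.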

How can I check that the traffic is encrypted and sent over the VxLAN tunnel?

  • Log in via a shell session to the node where the server pod has been deployed and install tcpdump.
apt-get update && apt install tcpdump -y
  • Check the status of the deployed pods.
kubectl get pods -o wide | grep server
server                         1/1     Running                  0          10m     10.1.2.235   aks-nodepool1-20058884-vmss000002   <none>           <none>


kubectl get pods -o wide | grep client
client                         1/1     Running                  0          26m     10.1.1.207   aks-nodepool1-20058884-vmss000001   <none>           <none>
  • With the client continuously sending traffic to the server, we can observe node-to-node and pod-to-pod traffic being encrypted and sent over distinct interfaces: eth0 and cilium_vxlan.
root@aks-nodepool1-20058884-vmss000002:/# tcpdump -nni eth0 esp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
^C12:13:20.738975 IP 10.224.0.4 > 10.224.0.5: ESP(spi=0x00000003,seq=0x82600), length 148
12:13:20.869037 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5a), length 148
12:13:21.081764 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5b), length 148
12:13:21.887047 IP 10.224.0.4 > 10.224.0.5: ESP(spi=0x00000003,seq=0x82601), length 148
12:13:21.938140 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5c), length 148
12:13:22.167485 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5d), length 148
12:13:22.964647 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5e), length 148
12:13:23.165258 IP 10.224.0.4 > 10.224.0.5: ESP(spi=0x00000003,seq=0x82602), length 148
12:13:23.210342 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc5f), length 148
12:13:23.999099 IP 10.224.0.6 > 10.224.0.5: ESP(spi=0x00000003,seq=0xffc60), length 148
root@aks-nodepool1-20058884-vmss000002:/# tcpdump -nni cilium_vxlan esp

tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on cilium_vxlan, link-type EN10MB (Ethernet), snapshot length 262144 bytes
12:08:19.096182 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x508c), length 96
12:08:19.096427 IP 10.1.2.45 > 10.1.1.106: ESP(spi=0x00000003,seq=0x4d87), length 96
12:08:19.096973 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x508d), length 88
12:08:19.097657 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x508e), length 156
12:08:19.097725 IP 10.1.2.45 > 10.1.1.106: ESP(spi=0x00000003,seq=0x4d88), length 88
12:08:19.105145 IP 10.1.2.45 > 10.1.1.106: ESP(spi=0x00000003,seq=0x4d89), length 320
12:08:19.105265 IP 10.1.2.45 > 10.1.1.106: ESP(spi=0x00000003,seq=0x4d8a), length 704
12:08:19.105450 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x508f), length 88
12:08:19.105530 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x5090), length 88
12:08:19.106045 IP 10.1.1.106 > 10.1.2.45: ESP(spi=0x00000003,seq=0x5091), length 88

Conclusion

Hopefully, this post gave you a good overview of enabling Multicast in the cloud with Isovalent Enterprise for Cilium and using IPsec transparent encryption to enable traffic encryption between pods across nodes. If you’d like to learn more, you can schedule a demo with our experts.

Try it out

Start with the Multicast lab and see how to enable multicast in your enterprise environment.
