
Enabling Enterprise features for Cilium in Azure Kubernetes Service (AKS)

Amit Gupta

“At any given moment, you have the power to say, this is not how the story will end.”—Christine Mason Miller. With that, we resume where we left off in Deploying Isovalent Enterprise for Cilium. This tutorial teaches you how to enable Enterprise features in an Azure Kubernetes Service (AKS) cluster running Isovalent Enterprise for Cilium.

What is Isovalent Enterprise for Cilium?

Isovalent Cilium Enterprise is an enterprise-grade, hardened distribution of the open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium?

For enterprise customers that require support and advanced networking, security, and observability features, “Isovalent Enterprise for Cilium” is recommended, offering the following benefits:

  • Advanced network policy: Isovalent Cilium Enterprise provides advanced network policy capabilities, including DNS-aware policy, L7 policy, and deny policy, enabling fine-grained control over network traffic for micro-segmentation and improved security.
  • Hubble flow observability + User Interface: Isovalent Cilium Enterprise’s Hubble observability feature provides real-time network traffic flow, policy visualization, and a powerful User Interface for easy troubleshooting and network management.
  • Multi-cluster connectivity via Cluster Mesh: Isovalent Cilium Enterprise provides seamless networking and security across multiple clouds, including public cloud providers like AWS, Azure, and Google Cloud Platform, as well as on-premises environments.
  • Advanced Security Capabilities via Tetragon: Tetragon provides advanced security capabilities such as protocol enforcement, IP and port whitelisting, and automatic application-aware policy generation to protect against the most sophisticated threats. Built on eBPF, Tetragon can easily scale to meet the needs of the most demanding cloud-native environments.
  • Service Mesh: Isovalent Cilium Enterprise provides sidecar-free, seamless service-to-service communication and advanced load balancing, making it easy to deploy and manage complex microservices architectures.
  • Enterprise-grade support: Isovalent Cilium Enterprise includes enterprise-grade support from Isovalent’s experienced team of experts, ensuring that any issues are resolved promptly and efficiently. Additionally, professional services help organizations deploy and manage Cilium in production environments.

Pre-Requisites

The following prerequisites need to be taken into account before you proceed with this tutorial:

What is in it for my Enterprise?

Isovalent Enterprise provides a range of advanced enterprise features, which you will learn about in this tutorial.

Layer 3/Layer 4 Policy

When using Cilium, endpoint IP addresses are irrelevant when defining security policies. Instead, you can use the labels assigned to the pods to define security policies. The policies will be applied to the right pods based on the labels, irrespective of where or when they run within the cluster.

The layer 3 policy establishes the base connectivity rules regarding which endpoints can talk to each other. 

The layer 4 policy can be specified independently or in addition to the layer 3 policies. It restricts an endpoint’s ability to emit and/or receive packets on a particular port using a particular protocol.

You can take a Star Wars-inspired example in which there are three microservices applications: deathstar, tiefighter, and xwing. The deathstar runs an HTTP web service on port 80, which is exposed as a Kubernetes Service to load-balance requests to deathstar across two pod replicas. The deathstar service provides landing services to the empire’s spaceships so that they can request a landing port. The tiefighter pod represents a landing-request client service on a typical empire ship, and xwing represents a similar service on an alliance ship. They exist so that you can test different security policies for access control to deathstar landing services.

Validate L3/L4 Policies

  • Deploy the sample application: deathstar, xwing, and tiefighter
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/http-sw-app.yaml
service/deathstar created
deployment.apps/deathstar created
pod/tiefighter created
pod/xwing created
  • Kubernetes will deploy the pods and service in the background.
  • Running kubectl get pods,svc will inform you about the progress of the operation.
kubectl get pods,svc
NAME                             READY   STATUS    RESTARTS   AGE
pod/client                       1/1     Running   0          23h
pod/deathstar-54bb8475cc-4gcv4   1/1     Running   0          3m9s
pod/deathstar-54bb8475cc-lq6sv   1/1     Running   0          3m9s
pod/server                       1/1     Running   0          23h
pod/tiefighter                   1/1     Running   0          3m9s
pod/xwing                        1/1     Running   0          3m9s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/deathstar    ClusterIP   10.0.114.36    <none>        80/TCP    3m9s
service/kubernetes   ClusterIP   10.0.0.1       <none>        443/TCP   4d1h
  • Check basic access
    • From the perspective of the deathstar service, only the ships with the label org=empire are allowed to connect and request landing. Since you have no rules enforced, both xwing and tiefighter can request landing.
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
  • You can start with a basic policy restricting deathstar landing requests to only the ships that have the label org=empire. This will not allow any ships that don’t have the org=empire label to even connect with the deathstar service. This simple policy filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often called an L3/L4 network security policy.
  • This policy (reproduced at the end of this section for reference) whitelists traffic sent from any pods with the label org=empire to deathstar pods with the labels org=empire, class=deathstar on TCP port 80.
  • You can now apply this L3/L4 policy:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/sw_l3_l4_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
  • If you run the landing requests, only the tiefighter pods with the label org=empire will succeed. The xwing pods will be blocked.
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
  • Now, the same request run from an xwing pod will fail:
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request or wait for it to time out.
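For reference, the L3/L4 policy applied above (sw_l3_l4_policy.yaml) looks roughly like the following; check the linked file for the exact contents of the version you apply:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP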

HTTP-aware L7 Policy

Layer 7 policy rules are embedded into Layer 4 rules and can be specified for ingress and egress. A layer 7 request is permitted if at least one of the rules matches. If no rules are specified, then all traffic is permitted. If a layer 4 rule is specified in the policy, and a similar layer 4 rule with layer 7 rules is also specified, then the layer 7 portions of the latter rule will have no effect.
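To make this concrete, the rule applied later in this section (sw_l3_l4_l7_policy.yaml) embeds an HTTP rule into the L4 toPorts section roughly as follows; consult the linked file for the exact contents:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L7 policy to restrict access to specific HTTP call"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "POST"
          path: "/v1/request-landing"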

Note-

  • This feature is currently in Beta. For production use, contact support@isovalent.com or sales@isovalent.com.
  • When WireGuard encryption is enabled, l7Proxy is set to false; to use L7 policies, re-enable it by updating the ARM template or via the Azure CLI.
  • This will be available in an upcoming release.

To provide the strongest security (i.e., enforce least-privilege isolation) between microservices, each service that calls deathstar’s API should be limited to making only the set of HTTP requests it requires for legitimate operation.

For example, consider that the deathstar service exposes some maintenance APIs that random empire ships should not call.

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded

goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
        /code/src/github.com/empire/deathstar/
        temp/main.go:9 +0x64
main.main()
        /code/src/github.com/empire/deathstar/
        temp/main.go:5 +0x85

Cilium can enforce HTTP-layer (i.e., L7) policies to limit the tiefighter pod’s ability to reach URLs.

Validate L7 Policy

  • Apply L7 Policy: an example policy file extends our original policy by limiting tiefighter to making only a POST /v1/request-landing API call and disallowing all other calls (including PUT /v1/exhaust-port).
  • Update the existing rule (from the L3/L4 section) to apply an L7-aware policy to protect deathstar:
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/sw_l3_l4_l7_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 configured
  • Re-run the landing request and a request to the exhaust port:
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed

kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
  • As this rule builds on the identity-aware rule, traffic from pods without the label org=empire will continue to be dropped, causing the connection to time out:
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

DNS-Based Policies

DNS-based policies are useful for controlling access to services outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services (such as those provided by Google) and internal services, such as database clusters running in private subnets outside Kubernetes. CIDR- or IP-based policies are cumbersome and hard to maintain, as the IPs associated with external services can change frequently. Cilium’s DNS-based policies provide an easy mechanism to specify access control, while Cilium manages the harder aspects of tracking DNS-to-IP mappings.
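A toFQDNs rule of the kind applied in the next section (dns-matchname.yaml) looks roughly like this; besides the GitHub rule, it allows the pod to reach kube-dns so that DNS lookups can be proxied and inspected (check the linked file for the exact contents):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchName: "api.github.com"
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"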

Validate DNS-Based Policies

  • In line with our Star Wars theme examples, you can use a simple scenario where the Empire’s mediabot pods need access to GitHub to manage the Empire’s git repositories. The pods shouldn’t have access to any other external service.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-sw-app.yaml
pod/mediabot created
  • Apply DNS Egress Policy- The following Cilium network policy allows mediabot pods to only access api.github.com
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-matchname.yaml
ciliumnetworkpolicy.cilium.io/fqdn created
  • Testing the policy, you can see that mediabot has access to api.github.com but doesn’t have access to any other external service, e.g., support.github.com
kubectl exec mediabot -- curl -I -s https://api.github.com | head -1
HTTP/1.1 200 OK

kubectl exec mediabot -- curl -I -s https://support.github.com | head -1
curl: (28) Connection timed out after 5000 milliseconds
command terminated with exit code 28

This request will hang, so press Control-C to kill the curl request or wait for it to time out.

Combining DNS, Port, and L7 Rules

The DNS-based policies can be combined with port (L4) and API (L7) rules to restrict access further. In this example, you restrict mediabot pods to access GitHub services only on port 443. The toPorts section in the policy below achieves the port-based restriction on top of the DNS-based rule.
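The combined policy (dns-port.yaml) looks roughly like the following; note how toPorts now sits alongside the toFQDNs rule (check the linked file for the exact contents):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  - toFQDNs:
    - matchPattern: "*.github.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"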

Validate the combination of DNS, Port, and L7-based Rules

  • Applying the policy
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-port.yaml
ciliumnetworkpolicy.cilium.io/fqdn configured
  • When testing, access to https://support.github.com on port 443 succeeds, while access to http://support.github.com on port 80 is denied.
kubectl exec mediabot -- curl -I -s https://support.github.com | head -1
HTTP/1.1 200 OK

kubectl exec mediabot -- curl -I -s --max-time 5 http://support.github.com | head -1
command terminated with exit code 28

Encryption

Cilium supports the transparent encryption of Cilium-managed host traffic and traffic between Cilium-managed endpoints using WireGuard® or IPsec. In this tutorial, you will learn about WireGuard.

Note-

  • This feature is currently in Beta. For production use, contact support@isovalent.com or sales@isovalent.com.
  • WireGuard encryption requires the L7 proxy to be disabled; set l7Proxy to false by updating the ARM template or via the Azure CLI.

WireGuard

When WireGuard is enabled in Cilium, the agent running on each cluster node will establish a secure WireGuard tunnel between it and all other known nodes.

Packets are not encrypted when destined to the same node from which they were sent. This behavior is intended. Encryption would provide no benefits, given that the raw traffic can be observed on the node anyway.

Validate WireGuard Encryption

  • To demonstrate WireGuard encryption, you can create a client pod on one node and a server pod on another node in AKS (a minimal sketch of such manifests appears at the end of this section).
    • The client does a “wget” towards the server every 2 seconds.
kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP                   NODE                                NOMINATED NODE   READINESS GATES
client   1/1     Running   0         4s    192.168.1.30   aks-nodepool1-18458950-vmss000000   <none>           <none>
server   1/1     Running   0       16h   192.168.0.38   aks-nodepool1-18458950-vmss000001   <none>           <none>
  • Check the encryption status reported by the Cilium agent on each node (the cilium CLI is run via kubectl -n kube-system exec, as shown below):
    • Verify that WireGuard has been enabled (the number of peers should correspond to the number of nodes minus one):
kubectl -n kube-system exec -it cilium-mgscb -- cilium status | grep Encryption
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
Encryption: Wireguard [cilium_wg0 (Pubkey: ###########################################, Port: 51871, Peers: 1)]

kubectl -n kube-system exec -it cilium-vr497 -- cilium status | grep Encryption
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
Encryption: Wireguard [cilium_wg0 (Pubkey: ###########################################, Port: 51871, Peers: 1)]
  • Install tcpdump on the node where the server pod has been created.
apt-get update
apt-get -y install tcpdump
  • Check that traffic (HTTP requests and responses) is sent via the cilium_wg0 tunnel device on the node where the server pod has been created:
tcpdump -n -i cilium_wg0

07:04:23.294242 IP 192.168.1.30.40170 > 192.168.0.38.80: Flags [P.], seq 1:70, ack 1, win 507, options [nop,nop,TS val 1189809356 ecr 3600600803], length 69: HTTP: GET / HTTP/1.1
07:04:23.294301 IP 192.168.0.38.80 > 192.168.1.30.40170: Flags [.], ack 70, win 502, options [nop,nop,TS val 3600600803 ecr 1189809356], length 0
07:04:23.294568 IP 192.168.0.38.80 > 192.168.1.30.40170: Flags [P.], seq 1:234, ack 70, win 502, options [nop,nop,TS val 3600600803 ecr 1189809356], length 233: HTTP: HTTP/1.1 200 OK
07:04:23.294747 IP 192.168.1.30.40170 > 192.168.0.38.80: Flags [.], ack 234, win 506, options [nop,nop,TS val 1189809356 ecr 3600600803], length 0
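For completeness, here is a minimal sketch of the client/server pair used above. The names, images, and the ClusterIP service are assumptions rather than the exact manifests used in this test; just make sure the two pods land on different nodes (for example, via nodeName or pod anti-affinity) so the traffic actually crosses the cilium_wg0 tunnel.

apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: wg-server
spec:
  containers:
  - name: server
    image: nginx
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wg-server
spec:
  selector:
    app: wg-server
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: client
    image: busybox
    # wget the server every 2 seconds, mirroring the test described above
    command: ["sh", "-c", "while true; do wget -q -O /dev/null http://wg-server; sleep 2; done"]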

Kube-Proxy Replacement

One of the additional benefits of using Cilium is its extremely efficient data plane. It is particularly useful at scale, as the standard kube-proxy is based on iptables, a technology that was never designed for the churn and scale of large Kubernetes clusters.

Validate Kube-Proxy Replacement

  • First, validate that the Cilium agent is running in the desired mode, with kube-proxy replacement set to Strict:
kubectl exec -it -n kube-system cilium-cfrng -- cilium status --verbose
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init), block-wireserver (init)
KVStore:                Ok   Disabled
Kubernetes:             Ok   1.25 (v1.25.6) [linux/amd64]
Kubernetes APIs:        ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:   Strict   [eth0 10.10.0.4 (Direct Routing)]
  • You can also check that kube-proxy is not running as a daemonset on the AKS cluster.
kubectl get ds -A
NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   azure-cns                    3         3         3       3            3           <none>                   19m
kube-system   azure-cns-win                0         0         0       0            0           <none>                   19m
kube-system   azure-ip-masq-agent          3         3         3       3            3           <none>                   19m
kube-system   cilium                       3         3         3       3            3           kubernetes.io/os=linux   18m
kube-system   cloud-node-manager           3         3         3       3            3           <none>                   19m
kube-system   cloud-node-manager-windows   0         0         0       0            0           <none>                   19m
kube-system   csi-azuredisk-node           3         3         3       3            3           <none>                   19m
kube-system   csi-azuredisk-node-win       0         0         0       0            0           <none>                   19m
kube-system   csi-azurefile-node           3         3         3       3            3           <none>                   19m
kube-system   csi-azurefile-node-win       0         0         0       0            0           <none>                   19m
  • You can deploy nginx pods, create a new NodePort service, and validate that Cilium installed the service correctly.
  • The following yaml is used for the backend pods:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 50
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
  • Verify that the NGINX pods are up and running: 
kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP              NODE                                NOMINATED NODE   READINESS GATES
my-nginx-77d5cb496b-69wtt   1/1     Running   0          46s   192.168.0.100   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-6nh8d   1/1     Running   0          46s   192.168.1.171   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-h9mxv   1/1     Running   0          46s   192.168.0.182   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-hnl6j   1/1     Running   0          46s   192.168.1.63    aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-mtnm9   1/1     Running   0          46s   192.168.0.170   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-pgvzj   1/1     Running   0          46s   192.168.0.237   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-rhx9q   1/1     Running   0          46s   192.168.1.247   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-w65kj   1/1     Running   0          46s   192.168.0.138   aks-nodepool1-21972290-vmss000000   <none>           <none>
my-nginx-77d5cb496b-xr96h   1/1     Running   0          46s   192.168.1.152   aks-nodepool1-21972290-vmss000001   <none>           <none>
my-nginx-77d5cb496b-zcwk5   1/1     Running   0          46s   192.168.1.75    aks-nodepool1-21972290-vmss000001   <none>           <none>
  • Create a NodePort service for the instances:
kubectl expose deployment my-nginx --type=NodePort --port=80
  • Verify that the NodePort service has been created:
kubectl get svc my-nginx
NAME       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
my-nginx   NodePort   10.0.87.130   <none>        80:31076/TCP   2s
  • With the help of the cilium service list command (run inside a Cilium agent pod), validate that Cilium’s eBPF kube-proxy replacement created the new NodePort service under port 31076 (truncated output):
9    10.0.87.130:80     ClusterIP      1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)

10   10.10.0.5:31076    NodePort       1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)

11   0.0.0.0:31076      NodePort       1 => 192.168.0.170:80 (active)
                                       2 => 192.168.1.152:80 (active)
  • At the same time, you can verify, using iptables in the host namespace on the node, that no iptables rules for the service are present: 
iptables-save | grep KUBE-SVC
  • Last but not least, a simple curl test shows connectivity for the exposed NodePort port 31076 as well as for the ClusterIP: 
curl 127.0.0.1:31076 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:00 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl 10.0.87.130 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:31 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

curl 10.10.0.5:31076 -I
HTTP/1.1 200 OK
Server: nginx/1.25.1
Date: Tue, 27 Jun 2023 13:31:47 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT
Connection: keep-alive
ETag: "6488865a-267"
Accept-Ranges: bytes

Upgrade AKS clusters running kube-proxy

Note- When I initially wrote this blog post, kube-proxy was enabled as a daemonset. You can now create or upgrade an AKS cluster on Isovalent Enterprise for Cilium, and your AKS clusters will no longer have a kube-proxy-based iptables implementation.

  • This example shows a service created on an AKS cluster running a kube-proxy-based implementation (i.e., not running Isovalent Enterprise for Cilium).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ipv4
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ipv4
  template:
    metadata:
      labels:
        app: nginx-ipv4
    spec:
      containers:
        - name: nginx-ipv4
          image: nginx:1.25.1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv4
spec:
  selector:
    app: nginx-ipv4
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ipFamilyPolicy: SingleStack
  ports:
    - port: 80
      targetPort: 80
  • The AKS cluster is then upgraded to Isovalent Enterprise for Cilium. As you can see, traffic continues to work seamlessly.
  • iptables rules are no longer involved; instead, Cilium endpoints come into play (see the quick check below).
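A quick way to confirm this on the upgraded cluster (a minimal check, assuming you still have kubectl access) is to verify that no kube-proxy daemonset exists and that Cilium endpoints back the pods:

# should return NotFound on a cluster running Isovalent Enterprise for Cilium
kubectl get ds -n kube-system kube-proxy

# lists the Cilium endpoints created for the pods, including nginx-ipv4
kubectl get ciliumendpoints -A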

Hubble CLI

Hubble’s CLI extends the visibility that is provided by standard kubectl commands like kubectl get pods to give you more network-level details about a request, such as its status and the security identities associated with its source and destination.

The Hubble CLI can be leveraged to observe network flows from Cilium agents. You can observe the flows from your local workstation for troubleshooting or monitoring. In this tutorial, all Hubble output relates to the tests performed above; you can run other tests and see the same kind of results with different values.

Setup Hubble Relay Forwarding

Use kubectl port-forward to the hubble-relay service, then point the Hubble CLI at the forwarded address.

kubectl port-forward -n kube-system svc/hubble-relay --address 0.0.0.0 4245:80
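The Hubble CLI talks to localhost:4245 by default, which matches the port-forward above. If your CLI is configured differently, you can point it at the forwarded port explicitly for a given command, for example:

hubble status --server localhost:4245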

Hubble Status

hubble status checks the overall health of Hubble within your cluster. If you are using Hubble Relay, a counter for the number of connected nodes will appear in the last line of the output.

hubble status 
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 8,190/8,190 (100.00%)
Flows/s: 30.89
Connected Nodes: 2/2

View Last N Events

hubble observe displays the most recent events, limited by the --last flag. Hubble Relay will display events from all connected nodes:

hubble observe --last 5

Jun 26 07:59:01.759: 10.10.0.5:37668 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37666 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37668 (host) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-stack FORWARDED (TCP Flags: ACK, FIN)
Jun 26 07:59:01.759: 10.10.0.5:37668 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK)
Jun 26 07:59:01.759: 10.10.0.5:37666 (host) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) to-stack FORWARDED (TCP Flags: ACK, FIN)

Follow Events in Real-Time

hubble observe --follow will follow the event stream for all connected clusters.

hubble observe --follow

Jun 26 08:09:47.938: 10.10.0.4:55976 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, FIN)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:09:47.938: default/tiefighter:54512 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))

Troubleshooting HTTP & DNS

  • Suppose you have a CiliumNetworkPolicy that enforces DNS or HTTP policy. In that case, you can use the --type l7 (-t l7) filtering option for hubble to check your applications’ HTTP methods and DNS resolution attempts.
hubble observe --since 1m -t l7

Jun 26 08:15:17.888: default/tiefighter:46930 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-request FORWARDED (HTTP/1.1 POST http://deathstar.default.svc.cluster.local/v1/request-landing)
Jun 26 08:15:17.888: default/tiefighter:46930 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jun 26 08:15:18.384: default/tiefighter:46932 (ID:5211) -> default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:15:18.384: default/tiefighter:46932 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
  • You can use --http-status to view specific flows with 200 HTTP responses
hubble observe --http-status 200

Jun 26 08:18:00.885: default/tiefighter:53064 (ID:5211) <- default/deathstar-54bb8475cc-bzkcs:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 0ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
Jun 26 08:18:07.510: default/tiefighter:43448 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 200 1ms (POST http://deathstar.default.svc.cluster.local/v1/request-landing))
  • You can also show HTTP PUT methods with --http-method
hubble observe --http-method PUT

Jun 26 08:19:51.270: default/tiefighter:55354 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:19:51.270: default/tiefighter:55354 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))
  • To view DNS traffic for a specific FQDN, you can use the --to-fqdn flag
hubble observe --to-fqdn "*.github.com"

Jun 26 10:34:34.196: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) policy-verdict:all EGRESS ALLOWED (TCP Flags: SYN)
Jun 26 10:34:34.196: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 10:34:34.198: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK)
Jun 26 10:34:34.198: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 10:34:34.200: default/mediabot:37956 (ID:37570) -> support.github.com:80 (ID:16777222) to-stack FORWARDED (TCP Flags: ACK, FIN)

Filter by Verdict

Hubble provides a field called VERDICT that displays one of FORWARDED, ERROR, or DROPPED for each flow. DROPPED can indicate an unsupported protocol on the underlying platform or a network policy enforcing pod communication. Hubble can introspect the reason for ERROR or DROPPED flows and displays the reason within the TYPE field of each flow.

hubble observe --output table --verdict DROPPED

Jun 26 08:18:59.517   default/tiefighter:37562   default/deathstar-54bb8475cc-dp646:80   http-request   DROPPED   HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port
Jun 26 08:19:12.451   default/tiefighter:35394   default/deathstar-54bb8475cc-dp646:80   http-request   DROPPED   HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port

Filter by Pod or Namespace

  • To show all flows for a specific pod, filter with the --from-pod (or --to-pod) flag
hubble observe --from-pod default/server

Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:25:00.001: default/client:36732 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, FIN, PSH)
  • If you are only interested in traffic from a pod to a specific destination, combine --from-pod and --to-pod
hubble observe --from-pod default/client --to-pod default/server

Jun 26 08:26:38.290: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 08:26:38.290: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: SYN)
Jun 26 08:26:38.291: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK)
Jun 26 08:26:38.291: default/client:41968 (ID:23611) -> default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK)
  • If you want to see all traffic from a specific namespace, specify the --from-namespace
hubble observe --from-namespace default

Jun 26 08:28:18.591: default/client:55870 (ID:23611) -> default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-stack FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, PSH)
Jun 26 08:28:18.592: default/client:55870 (ID:23611) <- default/server:80 (ID:36535) to-endpoint FORWARDED (TCP Flags: ACK, FIN, PSH)
Jun 26 08:28:19.029: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-stack FORWARDED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) policy-verdict:L3-L4 INGRESS ALLOWED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-proxy FORWARDED (TCP Flags: SYN)
Jun 26 08:28:19.030: default/tiefighter:60802 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) to-endpoint FORWARDED (TCP Flags: SYN, ACK)
Jun 26 08:28:19.032: default/tiefighter:60802 (ID:5211) -> default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-request DROPPED (HTTP/1.1 PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port)
Jun 26 08:28:19.032: default/tiefighter:60802 (ID:5211) <- default/deathstar-54bb8475cc-dp646:80 (ID:12749) http-response FORWARDED (HTTP/1.1 403 0ms (PUT http://deathstar.default.svc.cluster.local/v1/exhaust-port))

Filter Events with JQ

To filter events with the jq tool, switch the output to JSON mode. Piping the output through jq exposes more metadata, such as the workload labels, pod name, and namespace assigned to both source and destination. This information is accessible to Cilium because it is encoded in the packets based on pod identities.

hubble observe --output json | jq . | head -n 50

{
  "flow": {
    "time": "2023-06-26T08:30:06.145430409Z",
    "verdict": "FORWARDED",
    "ethernet": {
      "source": "76:2f:51:6e:8e:b4",
      "destination": "da:f3:2b:fc:25:fe"
    },
    "IP": {
      "source": "10.10.0.4",
      "destination": "192.168.0.241",
      "ipVersion": "IPv4"
    },
    "l4": {
      "TCP": {
        "source_port": 56198,
        "destination_port": 8080,
        "flags": {
          "SYN": true
        }
      }
    },
    "source": {
      "identity": 1,
      "labels": [
        "reserved:host"
      ]
    },
    "destination": {
      "ID": 60,
      "identity": 1008,
      "namespace": "kube-system",
      "labels": [
        "k8s:io.cilium.k8s.namespace.labels.addonmanager.kubernetes.io/mode=Reconcile",
        "k8s:io.cilium.k8s.namespace.labels.control-plane=true",
        "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/cluster-service=true",
        "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system",
        "k8s:io.cilium.k8s.policy.cluster=default",
        "k8s:io.cilium.k8s.policy.serviceaccount=coredns-autoscaler",
        "k8s:io.kubernetes.pod.namespace=kube-system",
        "k8s:k8s-app=coredns-autoscaler",
        "k8s:kubernetes.azure.com/managedby=aks"
      ],
      "pod_name": "coredns-autoscaler-69b7556b86-wrkqx",
      "workloads": [
        {
          "name": "coredns-autoscaler",
          "kind": "Deployment"
        }
      ]

Hubble UI

Note- To obtain the helm values to install Hubble UI and access the Enterprise documentation, you need to reach out to sales@isovalent.com and support@isovalent.com

The graphical user interface (Hubble UI) utilizes relay-based visibility to provide a graphical service dependency and connectivity map. Hubble UI is enabled via helm charts. The feature is not enabled by default when you create a new cluster using Isovalent Enterprise for Cilium or upgrade to Isovalent Enterprise for Cilium.

Once the installation is complete, you will notice the hubble-enterprise daemonset, and the hubble-ui pods are up and running:

kubectl get ds -A
NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   azure-cns                    2         2         2       2            2           <none>                   10d
kube-system   azure-cns-win                0         0         0       0            0           <none>                   10d
kube-system   cilium                       2         2         2       2            2           kubernetes.io/os=linux   10d
kube-system   cilium-dnsproxy              2         2         2       2            2           <none>                   8d
kube-system   cloud-node-manager           2         2         2       2            2           <none>                   10d
kube-system   cloud-node-manager-windows   0         0         0       0            0           <none>                   10d
kube-system   csi-azuredisk-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azuredisk-node-win       0         0         0       0            0           <none>                   10d
kube-system   csi-azurefile-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azurefile-node-win       0         0         0       0            0           <none>                   10d
kube-system   hubble-enterprise            2         2         2       2            2           <none>                   8d


kubectl get pods -n hubble-ui
NAME                         READY   STATUS    RESTARTS   AGE
hubble-ui-54c9cbfb78-2vqgb   2/2     Running   0          8d

Validate the installation

To access Hubble UI, forward a local port to the Hubble UI service:

kubectl port-forward -n hubble-ui svc/hubble-ui 12000:80

Then, open http://localhost:12000 in your browser: 

Select the default namespace, and you will observe a service map and a network event list. In this case, the pod mediabot (created in the previous test case) is trying to access support.github.com over port 443.

Note- You can read more about Hubble in a detailed three-part blog series.

DNS Proxy HA

Cilium Enterprise supports deploying an additional DNS proxy daemonset called cilium-dnsproxy that can be life-cycled independently of the Cilium daemonset.

What is Cilium DNS Proxy HA?

When Cilium Network Policies that make use of toFQDNs are installed in a Kubernetes cluster, the Cilium agent starts an in-process DNS proxy that becomes responsible for proxying all DNS requests between all pods and the Kubernetes internal kube-dns service. Whenever a Cilium agent is restarted, such as during an upgrade or due to something unexpected, DNS requests from all pods on that node do not succeed until the Cilium agent is online again.

When cilium-dnsproxy is enabled, an independently life-cycled DaemonSet is deployed. cilium-dnsproxy acts as a hot standby that mirrors DNS policy rules. cilium-agent and cilium-dnsproxy bind to the same port, relying on the Linux kernel to distribute DNS traffic between the two DNS proxy instances. This allows you to lifecycle either cilium or cilium-dnsproxy daemonset without impacting DNS traffic.

Installation of DNS-Proxy HA using helm

Note-

  • DNS Proxy High Availability relies on configuring the cilium-config ConfigMap with external-dns-proxy: true and on deploying the DNS proxy component.

DNS-Proxy HA is enabled via helm charts. The feature is not enabled by default when you create a new cluster using Isovalent Enterprise for Cilium or upgrade to Isovalent Enterprise for Cilium.
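As a sketch (assuming you manage the ConfigMap directly rather than through your helm values), the flag mentioned in the note above can be set like this; the Cilium agents typically need to be restarted for the change to take effect:

kubectl -n kube-system patch configmap cilium-config --type merge -p '{"data":{"external-dns-proxy":"true"}}'
kubectl -n kube-system rollout restart ds/cilium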

Once the installation is complete, you will notice cilium-dnsproxy running as a daemonset, with its pods up and running:

kubectl get ds -A
NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   azure-cns                    2         2         2       2            2           <none>                   10d
kube-system   azure-cns-win                0         0         0       0            0           <none>                   10d
kube-system   cilium                       2         2         2       2            2           kubernetes.io/os=linux   10d
kube-system   cilium-dnsproxy              2         2         2       2            2           <none>                   8d
kube-system   cloud-node-manager           2         2         2       2            2           <none>                   10d
kube-system   cloud-node-manager-windows   0         0         0       0            0           <none>                   10d
kube-system   csi-azuredisk-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azuredisk-node-win       0         0         0       0            0           <none>                   10d
kube-system   csi-azurefile-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azurefile-node-win       0         0         0       0            0           <none>                   10d
kube-system   hubble-enterprise            2         2         2       2            2           <none>                   8d

kubectl get pods -A -n kube-system | grep "cilium-dnsproxy-*"
kube-system                     cilium-dnsproxy-d5qgd                 1/1     Running   0          8d
kube-system                     cilium-dnsproxy-t52rn                 1/1     Running   0          8d

Validate DNS-Proxy HA

  • In line with our Star Wars theme examples, you can use a simple scenario where the Empire’s mediabot pods need access to GitHub to manage the Empire’s git repositories. The pods shouldn’t have access to any other external service.
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-sw-app.yaml
pod/mediabot created
  • Apply DNS Egress Policy- The following Cilium network policy allows mediabot pods to only access api.github.com
kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.13.4/examples/kubernetes-dns/dns-matchname.yaml
ciliumnetworkpolicy.cilium.io/fqdn created
  • Testing the policy, you can see that the mediabot pod has access to api.github.com 
kubectl exec mediabot -- curl -I -s https://api.github.com | head -1
HTTP/1.1 200 OK
  • Send packets continuously in a loop.
while true; do kubectl exec mediabot -- curl -I -s https://api.github.com | head -1; sleep 10; done
  • Simulate a failure scenario in which the cilium-agent pods are not up and running; the traffic still goes through the cilium-dnsproxy-* pods
  • cilium-agent pods are restarted as a part of the test
  • Traffic is not disrupted and continues to flow through the cilium-dnsproxy pods

Tetragon

Tetragon provides powerful security observability and a real-time runtime enforcement platform. The creators of Cilium have built Tetragon and brought the full power of eBPF to the world of security.

Tetragon helps platform and security teams solve the following:

Security Observability:

  • Observing application and system behavior such as process, syscall, file, and network activity
  • Tracing namespace, privilege, and capability escalations
  • File integrity monitoring

Runtime Enforcement:

  • Application of security policies to limit the privileges of applications and processes on a system (system calls, file access, network, kprobes)

Tetragon has been specifically built for Kubernetes and cloud-native infrastructure but can be run on any Linux system. Sometimes, you might want to enable process visibility in an environment where Cilium is not the CNI. The Security Observability provided by the hubble-enterprise daemonset can operate in a standalone mode, decoupled from Cilium as a CNI.

Installation of Tetragon using helm

Note- To obtain the helm values to install Tetragon and access to Enterprise documentation, you need to reach out to sales@isovalent.com and support@isovalent.com

Tetragon is enabled via helm charts; the feature is not enabled by default when you create a new cluster using Isovalent Enterprise for Cilium or upgrade to Isovalent Enterprise for Cilium.

Once the installation is complete, you will notice Tetragon running as a daemonset, and also, the pods are up and running:

kubectl get ds -A
NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   azure-cns                    2         2         2       2            2           <none>                   10d
kube-system   azure-cns-win                0         0         0       0            0           <none>                   10d
kube-system   cilium                       2         2         2       2            2           kubernetes.io/os=linux   10d
kube-system   cilium-dnsproxy              2         2         2       2            2           <none>                   8d
kube-system   cloud-node-manager           2         2         2       2            2           <none>                   10d
kube-system   cloud-node-manager-windows   0         0         0       0            0           <none>                   10d
kube-system   csi-azuredisk-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azuredisk-node-win       0         0         0       0            0           <none>                   10d
kube-system   csi-azurefile-node           2         2         2       2            2           <none>                   10d
kube-system   csi-azurefile-node-win       0         0         0       0            0           <none>                   10d
kube-system   hubble-enterprise            2         2         2       2            2           <none>                   9d

kubectl get pods -A -n kube-system | grep "hubble-enterprise"
kube-system                     hubble-enterprise-tkgss               2/2     Running   0          9d
kube-system                     hubble-enterprise-tmfqk               2/2     Running   0          9d

Validate Tetragon

You can use our Demo Application to explore the Process and Networking Events:
  • Create a namespace for the demo application and, once its workloads are deployed, check that the pods are running:
kubectl create namespace tenant-jobs
kubectl get pods -n tenant-jobs
NAME                             READY   STATUS    RESTARTS      AGE
coreapi-644c789b57-v2rtz         1/1     Running   1 (74s ago)   90s
crawler-5ddd94f6c8-hht9j         1/1     Running   0             81s
elasticsearch-775f697cdf-fgv28   1/1     Running   0             89s
jobposting-55999b9478-4p9qx      1/1     Running   0             94s
kafka-0                          1/1     Running   0             86s
loader-7958959495-6bqvp          1/1     Running   3 (39s ago)   82s
recruiter-7c9d56f68c-8pc24       1/1     Running   0             92s
zookeeper-f5b8f8465-z8w84        1/1     Running   0             85s
  • You can view all the pods in the tenant-jobs namespace in hubble-ui
  • You can examine the Process and Networking Events in two different ways:
    • Raw Json events
      • kubectl logs -n kube-system ds/hubble-enterprise -c export-stdout -f
    • Enable Hubble UI: the second way is to visualize the processes running on a certain workload by observing their Process Ancestry Tree. This tree gives you rich Kubernetes API, identity-aware metadata, and OS-level process visibility about the executed binary, its parents, and the execution time, up until dockerd has started the container.
    • While in a real-world deployment the Hubble Event Data would likely be exported to a SIEM or other logging datastore, in this quickstart you will access this Event Data by redirecting the logs of the export-stdout container of the hubble-enterprise pod:
      • kubectl logs -n kube-system ds/hubble-enterprise -c export-stdout > export.log
  • From the main Hubble UI screen, click on the tenant-jobs namespace in the list. Then, in the left navigation sidebar, click Processes.
  • To upload the exported logs, click Upload on the left of the screen:
  • Use the file selector dialog to choose the export.log generated earlier and select the tenant-jobs namespace from the namespace dropdown.
  • Here, you can get a brief overview of a security use case that can easily be detected and be interesting to visualize. By using Hubble UI and visualizing the Process Ancestry Tree, you can detect a shell execution in the crawler-YYYYYYYYY-ZZZZZ Pod that occurred more than 5 minutes after the container has started. After clicking on the crawler-YYYYYYYY-ZZZZZ pod name from the Pods selector dropdown on the left of the screen, you will be able to see the Process Ancestry Tree for that pod:
  • The Process Ancestry Tree gives us:
    • Rich Kubernetes Identity-Aware Metadata: You can see the name of the team or namespace and the specific application service to be inspected in the first row.
    • OS-Level Process Visibility: You can see all the processes that have been executed on the inspected service or were related to its Pod lifecycle 
    • DNS Aware Metadata: You can see all the external connections with the exact DNS name as an endpoint made from specific processes of the inspected service.

Enabling DNS Visibility

Outbound network traffic remains a major attack vector for many enterprises. In the example above, you can see that the crawler service reaches out to one or more services outside the Kubernetes cluster on port 443. However, the identity of these external services is unknown, as the flows only show an IP address.

Cilium Enterprise can parse the DNS-layer requests emitted by services and associate that identity data with outgoing connections, enriching network connectivity logs.

To inspect the DNS lookups for pods within a namespace, you must apply a network policy that tells Cilium to inspect port 53 traffic from pods to kube-dns at Layer 7. With this DNS visibility, Hubble flow data will be annotated with the DNS service identity for destinations outside the Kubernetes cluster. The demo app keeps reaching out to Twitter at regular intervals, and now both the service map and the flows table for the tenant-jobs namespace show connections to api.twitter.com.
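As a hedged sketch of such a visibility policy (the policy name is an assumption; the dns matchPattern rule is what tells Cilium to proxy and record DNS lookups, and the final toEntities rule keeps all other egress allowed):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: dns-visibility
  namespace: tenant-jobs
spec:
  endpointSelector: {}
  egress:
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  - toEntities:
    - all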

Enabling HTTP Visibility

As with DNS visibility, you can also apply a visibility policy to instruct Cilium to inspect certain traffic flows at the application layer (e.g., HTTP) without requiring any changes to the application itself.

In this example, you’ll inspect ingress connections to services within the tenant-jobs namespace at the HTTP layer. You can inspect flow details to get application-layer information; for example, you can inspect the HTTP queries that coreapi makes to elasticsearch.
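A sketch of such an HTTP visibility policy is below. The policy name, workload label, and port are assumptions to adjust to your environment; the empty http rule matches every request, so traffic is parsed rather than blocked, but keep in mind that, like any CiliumNetworkPolicy, it only allows the ingress it lists:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: http-visibility
  namespace: tenant-jobs
spec:
  endpointSelector:
    matchLabels:
      app: coreapi        # hypothetical label; match your workload
  ingress:
  - fromEndpoints:
    - {}
    toPorts:
    - ports:
      - port: "9080"      # hypothetical service port
        protocol: TCP
      rules:
        http:
        - {}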

Network Policy

In this scenario, you will create services, deployments, and pods (as in the earlier Hubble CLI sections) and then use the Hubble UI’s network policy editor and service map/flow details pages to create, troubleshoot, and update a network policy.

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/http-sw-app.yaml

Apply an L3/L4 Policy

  • You can start with a basic policy restricting deathstar landing requests to only the ships that have the label org=empire. This will not allow any ships that don’t have the org=empire label to even connect with the deathstar service. This simple policy filters only on IP protocol (network layer 3) and TCP protocol (network layer 4), so it is often called an L3/L4 network security policy.
  • The policy whitelists traffic sent from pods with the label org=empire to deathstar pods with the labels org=empire, class=deathstar on TCP port 80.
  • To apply this L3/L4 policy, run:
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.13/examples/minikube/sw_l3_l4_policy.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
  • If you run the landing requests again, only the tiefighter pods with the label org=empire will succeed. The xwing pods will be blocked.
kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
  • The same request run from an xwing pod will fail:
kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing

This request will hang, so press Control-C to kill the curl request or wait for it to time out. As you can see, Hubble UI reports that the flow is dropped.

  • Using the policy editor, click on the denied/dropped flows and add them to your policy.
  • You need to download the policy and apply/update it again.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: rule1
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
    - fromEndpoints:
        - matchLabels:
            k8s:app.kubernetes.io/name: xwing
            k8s:class: xwing
            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name: default
            k8s:io.kubernetes.pod.namespace: default
            k8s:org: alliance
      toPorts:
        - ports:
            - port: "80"
kubectl apply -f rule1.yaml
ciliumnetworkpolicy.cilium.io/rule1 configured
  • Once the updated policy is applied, run the same request from an xwing pod again; it should now succeed

Enabling TLS Visibility

Cilium’s Network Security Observability provides deep insights into network events, such as tls events containing information about the exact TLS connections, including the negotiated cipher suite, the TLS version, the source/destination IP addresses, and ports of the connection made by the initial process. In addition, tls events are also enriched by Kubernetes Identity-Aware metadata (Kubernetes namespace, pod/container name, labels, container image, etc).

By default, Tetragon does not show connectivity-related events. TLS visibility requires a specific tracing policy and a CRD, which we will apply.

Apply a TLS visibility CRD:

Note- Reach out to sales@isovalent.com or support@isovalent.com to get access to the TLS visibility CRD.

  • Let’s see the events we get out of the box. You can use the same pods that were created in the previous step.
  • Create a TLS visibility Policy
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: tls-visibility-policy
spec:
  parser:
    tls:
      enable: true
      mode: socket
      selectors:
        - matchPorts:
            - 443
    tcp:
      enable: true
  • Apply the Policy:
kubectl apply -f tls_parser.yaml
  • Send traffic to see the TLS ciphers and versions being used. From the xwing pod shell, try a simple curl to google.com and use openssl to check ciphers:
curl --silent --output /dev/null --show-error --fail https://www.google.com

openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES128-SHA
CONNECTED(00000003)

openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES256-SHA
CONNECTED(00000003)
  • Check the events in a different terminal:
kubectl exec -it -n kube-system daemonsets/hubble-enterprise -c enterprise -- \
  hubble-enterprise getevents -o compact --pods xwing

🚀 process default/xwing /usr/bin/openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES128-SHA
🔌 connect default/xwing /usr/bin/openssl TCP 10.241.0.20:54552 => 104.154.89.105:443
🔐 tls     default/xwing /usr/bin/openssl 104.154.89.105:443  TLS1.2 TLS_DHE_RSA_WITH_AES_128_CBC_SHA
💥 exit    default/xwing /usr/bin/openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES128-SHA SIGINT
🧹 close   default/xwing /usr/bin/openssl TCP 10.241.0.20:54552 => 104.154.89.105:443 tx 441 B rx 2.4 kB
🚀 process default/xwing /usr/bin/openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES256-SHA
🔌 connect default/xwing /usr/bin/openssl TCP 10.241.0.20:45918 => 104.154.89.105:443
🔐 tls     default/xwing /usr/bin/openssl 104.154.89.105:443  TLS1.2 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
💥 exit    default/xwing /usr/bin/openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES256-SHA 0
🧹 close   default/xwing /usr/bin/openssl TCP 10.241.0.20:45918 => 104.154.89.105:443 tx 494 B rx 2.5 kB
🚀 process default/xwing /usr/bin/openssl s_client -connect badssl.com:443 -cipher DHE-RSA-AES256-SHA
🔌 connect default/xwing /usr/bin/openssl TCP 10.241.0.20:45526 => 104.154.89.105:443
🔐 tls     default/xwing /usr/bin/openssl 104.154.89.105:443  TLS1.2 TLS_DHE_RSA_WITH_AES_256_CBC_SHA
  • You can also extract specific fields from the JSON output by piping it through jq:
kubectl exec -n kube-system daemonsets/hubble-enterprise -c enterprise -- \
  hubble-enterprise getevents -o json --pods xwing | jq -r '.tls.negotiated_version | select ( . != null)'

TLS1.2
TLS1.2
TLS1.2
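If you generate more traffic, piping the same filter through sort and uniq gives a quick count of the TLS versions observed:

kubectl exec -n kube-system daemonsets/hubble-enterprise -c enterprise -- \
  hubble-enterprise getevents -o json --pods xwing | jq -r '.tls.negotiated_version | select ( . != null)' | sort | uniq -c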
File Enforcement

Tetragon Enterprise uses information about the kernel’s internal structures to provide file monitoring. To configure File Enforcement, apply an example File Enforcement Policy that blocks write access to /etc/passwd and /etc/shadow and prevents these files from being deleted. This policy applies to host files and to all Kubernetes pods in the default Kubernetes namespace.

Note- Reach out to sales@isovalent.com or support@isovalent.com to get access to the specific file enforcement policies.
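Once you have the policy, applying it and generating a test event follows the same pattern as the TLS policy above. The filename below is a placeholder, and vi is simply one way to attempt a write to the protected file (which is what the captured event further down shows):

kubectl apply -f file-enforcement-policy.yaml
kubectl exec -it xwing -- vi /etc/passwd

In a second terminal, watch for the corresponding events with hubble-enterprise getevents, as shown in the previous sections.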

Using one of the existing pods, you will see events like the one below when a pod in the default Kubernetes namespace attempts to edit the /etc/passwd file:

{"process_exec":{"process":{"exec_id":"YWtzLWF#####################3MwMDAwMDA6MTM5OT################jE0", "pid":212614, "uid":0, "cwd":"/", "binary":"/usr/bin/vi", "arguments":"/etc/passwd", "flags":"execve rootcwd clone", "start_time":"2023-12-05T10:49:10.927995393Z", "auid":4294967295, "pod":{"namespace":"default", "name":"xwing", "labels":["k8s:app.kubernetes.io/name=xwing", "k8s:class=xwing", "k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default", "k8s:io.cilium.k8s.policy.cluster=default", "k8s:io.cilium.k8s.policy.serviceaccount=default", "k8s:io.kubernetes.pod.namespace=default", "k8s:org=alliance"], "container":{"id":"containerd://af3a386beef49967b092cab401347861ff41b000719fa4d33d401a8b91e45393", "name":"spaceship", "image":{"id":"docker.io/tgraf/netperf@sha256:8e86f744bfea165fd4ce68caa05abc96500f40130b857773186401926af7e9e6", "name":"docker.io/tgraf/netperf:latest"}, "start_time":"2023-12-05T10:48:24Z", "pid":13}, "pod_labels":{"app.kubernetes.io/name":"xwing", "class":"xwing", "org":"alliance"}}, "docker":"af3a386beef49967b092cab40134786",
Process Visibility

Tetragon Enterprise uses information about the internal structures of the kernel to provide process visibility. Run kubectl to validate that Hubble Enterprise is configured with process visibility enabled:

kubectl exec -n kube-system ds/hubble-enterprise -c enterprise -- hubble-enterprise getevents

tLTExNjUxNTQxLXZtc3MwMDAwMDA6MTow", "tid":1}]}, "node_name":"aks-azurecilium-11651541-vmss000000", "time":"2023-12-07T08:41:10.972136694Z"}
{"process_exit":{"process":{"exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6MTc5MTE1OTEwMjc4NTk0OjM5MTg1OTE=", "pid":3918591, "uid":0, "cwd":"/", "binary":"/usr/sbin/iptables", "arguments":"-w 5 -W 100000 -S KUBE-KUBELET-CANARY -t mangle", "flags":"execve rootcwd clone", "start_time":"2023-12-07T08:41:10.972137794Z", "auid":4294967295, "parent_exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6NDI3MzA1MDAwMDAwMDoyNTc2", "tid":3918591}, "parent":{"exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6NDI3MzA1MDAwMDAwMDoyNTc2", "pid":2576, "uid":0, "binary":"/usr/local/bin/kubelet", "arguments":"--enable-server --node-labels=agentpool=azurecilium,kubernetes.azure.com/agentpool=azurecilium,agentpool=azurecilium,kubernetes.azure.com/agentpool=azurecilium,kubernetes.azure.com/cluster=MC_azurecilium_azurecilium_canadacentral,kubernetes.azure.com/consolidated-additional-properties=#############################,kubernetes.azure.com/ebpf-dataplane=cilium,kubernetes.azure.com/kubelet-identity-client-id=df40d83d-aff9-4abb-b884-76633171f1f7,kubernetes.azure.com/mode=system,kubernetes.azure.com/network-name=azurecilium-vnet,kubernetes.azure.com/network-resourcegroup=azurecilium,kubernetes.azure.com/network-subnet=azurecilium-subnet-node,kubernetes.azure.com/network-subscription==#############################,,kubernetes.azure.com/node-image-version=AKSUbuntu-2204gen2containerd-202311.07.0,kubernetes.azure.com/nodenetwork-vnetguid=68980e49-f4c1-4488-876d-135069bd6b5e,kubernetes.azure.com/nodepool-type=VirtualMachineScaleSets,kubernetes.azure.com/os-sku=Ubuntu,kubernetes.azure.com/podnetwork-delegationguid=68980e49-f4c1-4488-876d-135069bd6b5e,kubernetes.azure.com/podnetwork-name=azurecilium-vnet,kubernetes.azure.com/podnetwork-resourcegroup=azurecilium,kubernetes.azure.com/podnetwork-subnet=azurecilium-subnet-pods,kubernetes.azure.com/podnetwork-subscription==#############################,,kubernet", "flags":"procFS truncArgs auid nocwd rootcwd", "start_time":"2023-12-05T08:07:08.111858200Z", "auid":4294967295, "parent_exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6ODgwMDAwMDAwOjE=", "refcnt":2, "tid":2576}}, "node_name":"aks-azurecilium-11651541-vmss000000", "time":"2023-12-07T08:41:10.973181702Z"}
{"process_exit":{"process":{"exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6MTc5MTE1OTA5MDM2Mzg1OjM5MTg1OTA=", "pid":3918590, "uid":0, "cwd":"/", "binary":"/usr/sbin/ip6tables", "arguments":"-w 5 -W 100000 -S KUBE-KUBELET-CANARY -t mangle", "flags":"execve rootcwd clone", "start_time":"2023-12-07T08:41:10.970895485Z", "auid":4294967295, "parent_exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6NDI3MzA1MDAwMDAwMDoyNTc2", "tid":3918590}, "parent":{"exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6NDI3MzA1MDAwMDAwMDoyNTc2", "pid":2576, "uid":0, "binary":"/usr/local/bin/kubelet", "arguments":"--enable-server --node-labels=agentpool=azurecilium,kubernetes.azure.com/agentpool=azurecilium,agentpool=azurecilium,kubernetes.azure.com/agentpool=azurecilium,kubernetes.azure.com/cluster=MC_azurecilium_azurecilium_canadacentral,kubernetes.azure.com/consolidated-additional-properties=0250fe0f-9345-11ee-8cde-fa726702814e,kubernetes.azure.com/ebpf-dataplane=cilium,kubernetes.azure.com/kubelet-identity-client-id=df40d83d-aff9-4abb-b884-76633171f1f7,kubernetes.azure.com/mode=system,kubernetes.azure.com/network-name=azurecilium-vnet,kubernetes.azure.com/network-resourcegroup=azurecilium,kubernetes.azure.com/network-subnet=azurecilium-subnet-node,kubernetes.azure.com/network-subscription==#############################,,kubernetes.azure.com/node-image-version=AKSUbuntu-2204gen2containerd-202311.07.0,kubernetes.azure.com/nodenetwork-vnetguid=68980e49-f4c1-4488-876d-135069bd6b5e,kubernetes.azure.com/nodepool-type=VirtualMachineScaleSets,kubernetes.azure.com/os-sku=Ubuntu,kubernetes.azure.com/podnetwork-delegationguid=68980e49-f4c1-4488-876d-135069bd6b5e,kubernetes.azure.com/podnetwork-name=azurecilium-vnet,kubernetes.azure.com/podnetwork-resourcegroup=azurecilium,kubernetes.azure.com/podnetwork-subnet=azurecilium-subnet-pods,kubernetes.azure.com/podnetwork-subscription==#############################,,kubernet", "flags":"procFS truncArgs auid nocwd rootcwd", "start_time":"2023-12-05T08:07:08.111858200Z", "auid":4294967295, "parent_exec_id":"YWtzLWF6dXJlY2lsaXVtLTExNjUxNTQxLXZtc3MwMDAwMDA6ODgwMDAwMDAwOjE=", "refcnt":1, "tid":2576}}, "node_name":"aks-azurecilium-11651541-vmss000000", "time":"2023-12-07T08:41:10.972058194Z"}

Conclusion

Hopefully, this post gave you a good overview of how and why you would deploy Isovalent Enterprise for Cilium from the Azure Marketplace and enable Enterprise features for end users. If you have any feedback on the solution, please share it with us. You’ll find us on the Cilium Slack channel.
