Kubernetes changes the way we think about networking. In an ideal Kubernetes world, the network would be flat, and the Pod network would control all routing and security between the applications using Network Policies. In many enterprise environments, though, the applications hosted on Kubernetes need to communicate with workloads outside the Kubernetes cluster, subject to connectivity constraints and security enforcement. Because of the nature of these networks, traditional firewalling usually relies on static IP addresses (or at least IP ranges). This can make it difficult to integrate a Kubernetes cluster, which has a varying and, at times, dynamic number of nodes, into such a network. Cilium’s Egress Gateway feature changes this by allowing you to specify which nodes should be used by a pod to reach the outside world. This blog post will walk you through deploying Cilium and Egress Gateway in AKS (Azure Kubernetes Service) using BYOCNI as the network plugin.
What is an Egress Gateway?
The egress gateway feature allows redirecting traffic originating in pods destined to specific CIDRs outside the cluster to be routed through particular nodes.
When the egress gateway feature is enabled and egress gateway policies are in place, packets leaving the cluster are masqueraded with selected, predictable IPs associated with the gateway nodes. This feature can be used with legacy firewalls to allow traffic to legacy infrastructure only from specific pods within a given namespace. These pods typically have ever-changing IP addresses. Even if masquerading were to be used to mitigate this, the IP addresses of nodes can also change frequently over time.
As an example, all pods that match the label egress-node will be routed through the 192.168.11.4 IP when they reach out to any address in the 192.168.11.0/24 range, which is outside the cluster.
What is Isovalent Enterprise for Cilium?
Isovalent Enterprise for Cilium is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.
Why Isovalent Enterprise for Cilium?
While Egress Gateway in Cilium is a great step forward, most enterprise environments should not rely on a single point of failure for network routing. For this reason, Isovalent introduced Egress Gateway High Availability (HA), which supports multiple egress nodes. The nodes acting as egress gateways will then load-balance traffic in a round-robin fashion and provide fallback nodes in case one or more egress nodes fail.
The multiple egress nodes can be configured using an egressGroups field in the IsovalentEgressGatewayPolicy resource specification, which we will detail in Scenario 2 of the tutorial below.
Pre-Requisites
The following prerequisites need to be taken into account before you proceed with this tutorial:
Azure CLI version 2.48.1 or later. Run az --version to see the currently installed version. If you need to install or upgrade, see Install Azure CLI.
If using ARM templates or the REST API, the AKS API version must be 2022-09-02-preview or later.
The kubectl command line tool is installed on your device. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is 1.26, you can use kubectl version 1.25, 1.26, or 1.27 with it. To install or upgrade kubectl, see Installing or updating kubectl.
Keep the following limitations in mind (more may be added over time):
The Egress gateway feature is partially incompatible with L7 policies.
Specifically, when an egress gateway policy and an L7 policy both select the same endpoint, traffic from that endpoint does not go through the egress gateway, even if the policy allows it.
Egress Gateway is incompatible with Isovalent’s Cluster Mesh feature.
Which network plugin can I use for Egress Gateway in AKS?
For this tutorial, we will create an Azure Kubernetes Service (AKS) cluster with Bring Your Own CNI (BYOCNI) as the network plugin and walk through two scenarios.
Scenario 1- Egress Gateways in a single Availability Zone.
Pre-Requisites:
The AKS cluster is created in VNET A, subnet A
The Egress Gateway is created in VNET A, subnet B
VNET= 192.168.8.0/22
Subnet A= 192.168.10.0/24
Subnet B= 192.168.11.0/24
A test VM is created in VNET A, subnet B
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
Replace SubscriptionName with your subscription name.
You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName
AKS Cluster creation
Create an AKS cluster with the network plugin as BYOCNI.
az group create -l eastus -n byocni
az network vnet create -g byocni --location eastus --name byocni-vnet --address-prefixes 192.168.8.0/22 -o none
az network vnet subnet create -g byocni --vnet-name byocni-vnet --name byocni-subnet --address-prefixes 192.168.10.0/24 -o none
az network vnet subnet create -g byocni --vnet-name byocni-vnet --name egressgw-subnet --address-prefixes 192.168.11.0/24 -o none
az aks create -l eastus -g byocni -n byocni --network-plugin none --vnet-subnet-id /subscriptions/#############################/resourceGroups/byocni/providers/Microsoft.Network/virtualNetworks/byocni-vnet/subnets/byocni-subnet
az aks get-credentials --resource-group byocni --name byocni
Note- You can also create an AKS cluster with BYOCNI using Terraform.
Create an unmanaged AKS nodepool in a different subnet.
Create an AKS nodepool in the egressgw-subnet (created in the previous step).
az aks nodepool add -g byocni --cluster-name byocni -n egressgw --enable-node-public-ip --node-count 1 --vnet-subnet-id /subscriptions/###############################/resourceGroups/byocni/providers/Microsoft.Network/virtualNetworks/byocni-vnet/subnets/egressgw-subnet
Assign a label to the unmanaged nodepool
Label the node pool: specify its name with the --name parameter and the labels with the --labels parameter. Labels must be key/value pairs with valid syntax.
az aks nodepool update --resource-group byocni --cluster-name byocni --name egressgw --labels io.cilium/egress-gateway=true
Note: this doesn’t create a new NIC. Traffic from the client pod is redirected by the egress gateway datapath to eth0 (192.168.11.5) of the node labeled io.cilium/egress-gateway=true, and from there it is automatically NATed to the node’s assigned public IP.
The API provided by Isovalent to drive the Egress Gateway feature is the IsovalentEgressGatewayPolicy resource.
The selectors field of an IsovalentEgressGatewayPolicy resource is used to select source pods via a label selector. This can be done using matchLabels:
One or more destination CIDRs can be specified with destinationCIDRs:
destinationCIDRs:
- "a.b.c.d/32"
- "e.f.g.0/24"
The group of nodes that should act as gateway nodes for a given policy can be configured with the egressGroups field. Nodes are matched based on their labels, with the nodeSelector field:
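Putting these fields together, a minimal IsovalentEgressGatewayPolicy for Scenario 1 could look like the sketch below. The policy name, the app: busybox pod label, and the destination CIDR are illustrative assumptions for this environment (and the apiVersion may differ slightly between Isovalent Enterprise for Cilium releases); adjust them to your own workloads.

apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  name: egress-sample
spec:
  selectors:
  # Assumed label on the client pod; change it to match your workload.
  - podSelector:
      matchLabels:
        app: busybox
  destinationCIDRs:
  # Subnet B, where the test VM lives (outside the cluster).
  - "192.168.11.0/24"
  egressGroups:
  # Select the gateway nodes using the label applied to the egressgw nodepool above.
  - nodeSelector:
      matchLabels:
        io.cilium/egress-gateway: "true"

Apply it with kubectl apply -f egress-sample.yaml.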
Deploy a client pod and apply the IsovalentEgressGatewayPolicy, and observe that the pod’s connection gets redirected through the Gateway node.
The client pod gets deployed to one of the two (managed) nodes, and the IEGP (Isovalent Egress Gateway Policy) selects one or both of the egress nodes (depending on the egress gateway IPs specified) as the Gateway node.
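A minimal client pod sketch is shown below. The pod name matches the busybox pod used in the commands that follow, while the app: busybox label and the curlimages/curl image are assumptions (busybox itself does not ship curl), chosen so the pod matches the policy selector above and can run curl.

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox   # must match the podSelector in the IsovalentEgressGatewayPolicy
spec:
  containers:
  - name: client
    image: curlimages/curl   # assumed image that includes curl
    command: ["sleep", "infinity"]

Apply it with kubectl apply -f busybox.yaml and wait for the pod to be Running.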
Create a VM in the same subnet as Egress Gateway and run a simple service on port 80 (like NGINX) that will respond to traffic sent from a pod on one of the worker nodes.
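For example, on an Ubuntu test VM you could install and start NGINX like this (package and service names assume Ubuntu/Debian):

sudo apt-get update && sudo apt-get install -y nginx
sudo systemctl enable --now nginx
curl -I http://localhost   # expect HTTP/1.1 200 OK from nginx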
Test VM IP, in this case, is 192.168.11.4
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:22:48:3c:63:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.4/24 brd 192.168.11.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::222:48ff:fe3c:6351/64 scope link
       valid_lft forever preferred_lft forever
Traffic Generation (towards the server in Egress GW subnet)
Send traffic toward the test VM.
kubectl exec busybox -- curl -I 192.168.11.4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0   612    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Mon, 29 Apr 2024 12:44:07 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 29 Apr 2024 08:55:12 GMT
Connection: keep-alive
ETag: "662f6070-264"
Accept-Ranges: bytes
Traffic Generation (outside of the cluster towards the Internet)
Send traffic to a public service.
Note the IP it returns is the egress gateway node’s Public IP.
kubectl exec busybox -- curl ifconfig.me
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13  100    13    0     0    181      0 --:--:-- --:--:-- --:--:--   183
52.156.19.241
Take a tcpdump from one of the egress gateway nodes.
Install tcpdump on the egress gateway node via apt-get install tcpdump
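For instance, after connecting to the egress gateway node (via SSH or its public IP), a capture along these lines can be used; the interface name and the port filter are assumptions based on this setup:

sudo apt-get update && sudo apt-get install -y tcpdump
# Capture the HTTP traffic leaving eth0 as well as the VXLAN-encapsulated packets (UDP 8472) arriving from the worker node
sudo tcpdump -ni eth0 'port 80 or udp port 8472'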
As you can see, 10.0.0.165 is the client-pod IP from which the egress gateway node is receiving packets, and 192.168.11.5 is the egress gateway node’s eth0 IP address.
IP 34.117.118.44.80 > 10.0.0.165.45468: Flags [S.], seq 2883623483, ack 1878158107, win 65535, options [mss 1412,sackOK,TS val 1551427192 ecr 3654296066,nop,wscale 8], length 0
IP 10.0.0.165.45468 > 34.117.118.44.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 3654296076 ecr 1551427192], length 0
13:09:10.372213 IP 192.168.11.5.45468 > 34.117.118.44.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 3654296076 ecr 1551427192], length 0
IP 10.0.0.165.45468 > 34.117.118.44.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 3654296077 ecr 1551427192], length 75: HTTP: GET / HTTP/1.1
13:09:10.372313 IP 192.168.11.5.45468 > 34.117.118.44.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 3654296077 ecr 1551427192], length 75: HTTP: GET / HTTP/1.1
13:09:10.380927 IP 34.117.118.44.80 > 192.168.11.5.45468: Flags [.], ack 76, win 256, options [nop,nop,TS val 1551427202 ecr 3654296077], length 0
IP 34.117.118.44.80 > 10.0.0.165.45468: Flags [.], ack 76, win 256, options [nop,nop,TS val 1551427202 ecr 3654296077], length 0
13:09:10.430485 IP 34.117.118.44.80 > 192.168.11.5.45468: Flags [P.], seq 1:183, ack 76, win 256, options [nop,nop,TS val 1551427251 ecr 3654296077], length 182: HTTP: HTTP/1.1 200 OK
IP 34.117.118.44.80 > 10.0.0.165.45468: Flags [P.], seq 1:183, ack 76, win 256, options [nop,nop,TS val 1551427251 ecr 3654296077], length 182: HTTP: HTTP/1.1 200 OK
IP 10.0.0.165.45468 > 34.117.118.44.80: Flags [.], ack 183, win 506, options [nop,nop,TS val 3654296135 ecr 1551427251], length 0
13:09:10.430983 IP 192.168.11.5.45468 > 34.117.118.44.80: Flags [.], ack 183, win 506, options [nop,nop,TS val 3654296135 ecr 1551427251], length 0
13:09:10.434396 IP 192.168.10.4.44693 > 192.168.11.5.8472: OTV, flags [I] (0x08), overlay 0, instance 53596
IP 10.0.0.165.45468 > 34.117.118.44.80: Flags [F.], seq 76, ack 183, win 506, options [nop,nop,TS val 3654296139 ecr 1551427251], length 0
13:09:10.434476 IP 192.168.11.5.45468 > 34.117.118.44.80: Flags [F.], seq 76, ack 183, win 506, options [nop,nop,TS val 3654296139 ecr 1551427251], length 0
13:09:10.443468 IP 34.117.118.44.80 > 192.168.11.5.45468: Flags [F.], seq 183, ack 77, win 256, options [nop,nop,TS val 1551427264 ecr 3654296139], length 0
IP 34.117.118.44.80 > 10.0.0.165.45468: Flags [F.], seq 183, ack 77, win 256, options [nop,nop,TS val 1551427264 ecr 3654296139], length 0
IP 10.0.0.165.45468 > 34.117.118.44.80: Flags [.], ack 184, win 506, options [nop,nop,TS val 3654296148 ecr 1551427264], length 0
13:09:10.443787 IP 192.168.11.5.45468 > 34.117.118.44.80: Flags [.], ack 184, win 506, options [nop,nop,TS val 3654296148 ecr 1551427264], length 0
Scenario 2- Egress Gateways in a Multi-Availability Zone environment.
Geo-redundancy across availability zones is a must for many enterprises, and combined with High Availability for the Egress Gateway, it makes for a solution enterprises are keen to consider.
Pre-Requisites:
The AKS cluster is created in VNET A, subnet A
The Egress Gateway is created in VNET A, subnet B
VNET= 192.168.8.0/22
Subnet A= 192.168.10.0/24
Subnet B= 192.168.11.0/24
A test VM is created in VNET A, subnet B
Set the subscription
Choose the subscription you want to use if you have multiple Azure subscriptions.
Replace SubscriptionName with your subscription name.
You can also use your subscription ID instead of your subscription name.
az account set --subscription SubscriptionName
AKS cluster creation with nodepools across AZs
Create an AKS cluster with the network plugin as BYOCNI and nodepools across different Availability Zones.
az group create -l eastus -n byocni
az network vnet create -g byocni --location eastus --name byocni-vnet --address-prefixes 192.168.8.0/22 -o none
az network vnet subnet create -g byocni --vnet-name byocni-vnet --name byocni-subnet --address-prefixes 192.168.10.0/24 -o none
az network vnet subnet create -g byocni --vnet-name byocni-vnet --name egressgw-subnet --address-prefixes 192.168.11.0/24 -o none
az aks create -l eastus -g byocni -n byocni --network-plugin none --vm-set-type VirtualMachineScaleSets --zones 1 2 --vnet-subnet-id /subscriptions/###############################/resourceGroups/byocni/providers/Microsoft.Network/virtualNetworks/byocni-vnet/subnets/byocni-subnet
az aks get-credentials --resource-group byocni --name byocni
Create an unmanaged AKS nodepool in a different subnet.
Create an AKS nodepool in the egressgw-subnet (created in the previous step).
az aks nodepool add -g byocni --cluster-name byocni -n egressgw --enable-node-public-ip --node-count 2 --vnet-subnet-id /subscriptions/#######################################/resourceGroups/byocni/providers/Microsoft.Network/virtualNetworks/byocni-vnet/subnets/egressgw-subnet --vm-set-type VirtualMachineScaleSets --zones 1 2
Assign a label to the unmanaged nodepool
Label the node pool: specify its name with the --name parameter and the labels with the --labels parameter. Labels must be key/value pairs with valid syntax.
az aks nodepool update --resource-group byocni --cluster-name byocni --name egressgw --labels io.cilium/egress-gateway=true
Note: this doesn’t create a new NIC. Traffic from the client pod is redirected by the egress gateway datapath to eth0 (192.168.11.5 or 192.168.11.4) of a node labeled io.cilium/egress-gateway=true, and from there it is automatically NATed to that node’s assigned public IP.
Check that all nodes have been created in different Availability Zones.
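One way to verify this is to list the nodes along with their zone label, for example:

kubectl get nodes -L topology.kubernetes.io/zone
# or only the egress gateway nodepool:
kubectl get nodes -l io.cilium/egress-gateway=true -L topology.kubernetes.io/zone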
Deploy a client pod and apply the IsovalentEgressGatewayPolicy, and observe that the pod’s connection gets redirected through the Gateway node.
The client pod gets deployed to one of the two (managed) nodes, and the IEGP (Isovalent Egress Gateway Policy) selects one or both of the egress nodes (depending on the egress gateway IPs specified) as the Gateway node.
Create a VM in the same subnet as Egress Gateway and run a simple service on port 80 (like NGINX) that will respond to traffic sent from a pod on one of the worker nodes.
Test VM IP, in this case, is 192.168.11.4
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:22:48:3c:63:51 brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.4/24 brd 192.168.11.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::222:48ff:fe3c:6351/64 scope link
       valid_lft forever preferred_lft forever
Traffic Generation (towards the server in Egress GW subnet)
Send traffic toward the test VM.
kubectl exec busybox -- curl -I 192.168.11.4
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0   612    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Mon, 29 Apr 2024 12:44:07 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 29 Apr 2024 08:55:12 GMT
Connection: keep-alive
ETag: "662f6070-264"
Accept-Ranges: bytes
Traffic Generation (outside of the cluster towards the Internet)
Send traffic to a public service.
Note the IP it returns is the egress gateway node’s Public IP.
kubectl exec busybox -- curl ifconfig.me
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13  100    13    0     0    161      0 --:--:-- --:--:-- --:--:--   162
4.172.200.224
kubectl exec busybox -- curl ifconfig.me
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    13  100    13    0     0    155      0 --:--:-- --:--:-- --:--:--   156
20.63.116.127
Take a tcpdump from one of the egress gateway nodes.
Install tcpdump on the egress gateway node via apt-get install tcpdump.
As you can see, 10.0.0.225 is the client-pod IP from which the egress gateway node is receiving packets, and 192.168.11.5 is the egress gateway node’s eth0 IP address.
08:15:22.713376 IP 168.63.129.16.53 > 192.168.11.5.34878: 42594 1/0/1 A 34.117.118.44 (56)
IP 168.63.129.16.53 > 10.0.2.21.34878: 42594 1/0/1 A 34.117.118.44 (56)
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [S], seq 102756168, win 64860, options [mss 1410,sackOK,TS val 2905874533 ecr 0,nop,wscale 7], length 0
08:15:22.716722 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [S], seq 102756168, win 64860, options [mss 1410,sackOK,TS val 2905874533 ecr 0,nop,wscale 7], length 0
08:15:22.725284 IP 34.117.118.44.80 > 192.168.11.5.42814: Flags [S.], seq 1883155317, ack 102756169, win 65535, options [mss 1412,sackOK,TS val 3464590701 ecr 2905874533,nop,wscale 8], length 0
IP 34.117.118.44.80 > 10.0.0.225.42814: Flags [S.], seq 1883155317, ack 102756169, win 65535, options [mss 1412,sackOK,TS val 3464590701 ecr 2905874533,nop,wscale 8], length 0
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2905874545 ecr 3464590701], length 0
08:15:22.727367 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [.], ack 1, win 507, options [nop,nop,TS val 2905874545 ecr 3464590701], length 0
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 2905874545 ecr 3464590701], length 75: HTTP: GET / HTTP/1.1
08:15:22.727389 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [P.], seq 1:76, ack 1, win 507, options [nop,nop,TS val 2905874545 ecr 3464590701], length 75: HTTP: GET / HTTP/1.1
08:15:22.735433 IP 34.117.118.44.80 > 192.168.11.5.42814: Flags [.], ack 76, win 256, options [nop,nop,TS val 3464590712 ecr 2905874545], length 0
IP 34.117.118.44.80 > 10.0.0.225.42814: Flags [.], ack 76, win 256, options [nop,nop,TS val 3464590712 ecr 2905874545], length 0
08:15:22.765735 IP 34.117.118.44.80 > 192.168.11.5.42814: Flags [P.], seq 1:183, ack 76, win 256, options [nop,nop,TS val 3464590742 ecr 2905874545], length 182: HTTP: HTTP/1.1 200 OK
IP 34.117.118.44.80 > 10.0.0.225.42814: Flags [P.], seq 1:183, ack 76, win 256, options [nop,nop,TS val 3464590742 ecr 2905874545], length 182: HTTP: HTTP/1.1 200 OK
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [.], ack 183, win 506, options [nop,nop,TS val 2905874585 ecr 3464590742], length 0
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [F.], seq 76, ack 183, win 506, options [nop,nop,TS val 2905874585 ecr 3464590742], length 0
08:15:22.768788 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [.], ack 183, win 506, options [nop,nop,TS val 2905874585 ecr 3464590742], length 0
08:15:22.768857 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [F.], seq 76, ack 183, win 506, options [nop,nop,TS val 2905874585 ecr 3464590742], length 0
08:15:22.777039 IP 34.117.118.44.80 > 192.168.11.5.42814: Flags [F.], seq 183, ack 77, win 256, options [nop,nop,TS val 3464590753 ecr 2905874585], length 0
IP 34.117.118.44.80 > 10.0.0.225.42814: Flags [F.], seq 183, ack 77, win 256, options [nop,nop,TS val 3464590753 ecr 2905874585], length 0
IP 10.0.0.225.42814 > 34.117.118.44.80: Flags [.], ack 184, win 506, options [nop,nop,TS val 2905874596 ecr 3464590753], length 0
08:15:22.778466 IP 192.168.11.5.42814 > 34.117.118.44.80: Flags [.], ack 184, win 506, options [nop,nop,TS val 2905874596 ecr 3464590753], length 0
Availability Zone Affinity
It is possible to control the AZ affinity of the egress gateway traffic with azAffinity. This feature relies on the well-known topology.kubernetes.io/zone node label to match or prefer gateway nodes within the same AZ as the source pods (“local” gateways), based on the configured mode of operation.
The following modes of operation are available (a sample policy snippet showing where azAffinity is set follows the list):
disabled: This mode uses all the active gateways available, regardless of their AZ. This is the default mode of operation.
By taking a tcpdump from both the egress nodes, we can see that the traffic flows across both the egress nodes.
localOnly: This mode selects only local gateways. If no local gateways are available, traffic will not pass through the non-local gateways and will be dropped.
By taking a tcpdump from both the egress nodes, we can see that the traffic flows across one of the local gateways.
localOnlyFirst: This mode selects only local gateways as long as at least one is available in a given AZ. When no more local gateways are available, non-local gateways will be selected.
By taking a tcpdump from both the egress nodes, we can see that the traffic flows across one of the local gateways.
localPriority: this mode selects all gateways, but local gateways are picked up first. In conjunction with maxGatewayNodes, this can prioritize local gateways over non-local ones, allowing for a graceful fallback to non-local gateways in case the local ones become unavailable.
By taking a tcpdump from both the egress nodes, we can see that the traffic flows across one of the local gateways.
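As a sketch of where azAffinity sits in the policy, the snippet below reuses the illustrative policy from Scenario 1; the selected mode, labels, and destination CIDR are assumptions, and the exact field layout may vary between Isovalent Enterprise for Cilium releases:

apiVersion: isovalent.com/v1
kind: IsovalentEgressGatewayPolicy
metadata:
  name: egress-sample-az
spec:
  selectors:
  - podSelector:
      matchLabels:
        app: busybox          # assumed client-pod label
  destinationCIDRs:
  - "0.0.0.0/0"
  # One of: disabled | localOnly | localOnlyFirst | localPriority
  azAffinity: localOnlyFirst
  egressGroups:
  - nodeSelector:
      matchLabels:
        io.cilium/egress-gateway: "true"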
Note: Isovalent support does not approve the following workaround, because the changes will be lost if the nodes are rebooted or upgraded. Users must also ensure packets are routed for the respective IP addresses they add to the egress gateway node.
An AKS cluster in BYOCNI mode with managed and unmanaged nodepools is created with a single NIC.
This limits users’ ability to associate more IP addresses when they have outbound connections to servers/databases spread across multiple subnets or within the same subnet.
AKS doesn’t allow more than one NIC per nodepool, but you can add more IPs to the existing NIC, which solves this issue to some extent; the additional IPs can then be referenced in the policy via either interface: ethX or egressIP: x.x.x.x.
This is limited to 254 IP addresses per NIC.
  # Specify the IP address used for egress.
  # It must exist as an IP associated with a network interface on the instance.
  egressIP: 10.100.255.50
- # Specify the node or set of nodes that should be part of this egress group.
  # When 'interface' is specified this node selector can target multiple nodes.
  nodeSelector:
    matchLabels:
      node.kubernetes.io/pool: wg-2
      topology.kubernetes.io/zone: canadacentral
  # Specify the interface to be used for egress traffic.
  # A single IP address is expected to be associated with this interface in each node.
  interface: eth1
Update the existing nodepool where the egress gateway has been created (in either Scenario 1 or Scenario 2).
Once the update goes through, based on the host OS on the nodepool, you need to add the respective IP addresses on the host.
In this case, it was Ubuntu 22.04, for which we will use netplan for OS network management.
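A minimal netplan drop-in for the secondary IP could look like the following; the file name and the 192.168.11.100 address are assumptions that mirror the output shown below:

# /etc/netplan/60-egressgw-secondary-ip.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses:
      - 192.168.11.100/24   # secondary egress IP added to the existing NIC

Apply it with sudo netplan apply.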
Verify that the changes are in place.
root@aks-egressgw-27814974-vmss000000:/# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.11.4/24 metric 100 brd 192.168.11.255 scope global eth0
    inet 192.168.11.100/24 brd 192.168.11.255 scope global secondary eth0
You can now add egress gateway policies for the new IP addresses that have been added and scale the solution.
Common questions about Egress Gateway
How is the traffic encapsulated from the worker node to the egress node?
The traffic is encapsulated from a worker node to an egress node regardless of the tunnel mode, and in this case, the AKS cluster with BYOCNI uses VXLAN as the encapsulation.
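You can see this on the egress gateway node by capturing the VXLAN traffic (UDP port 8472) arriving from the worker node; the interface name below is an assumption:

sudo tcpdump -ni eth0 -vv udp port 8472
# Older tcpdump versions decode the VXLAN header as "OTV ... instance <VNI>", as seen in the captures above.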
How can you find the identity of the source endpoint if the traffic is encapsulated?
The VNI in the VXLAN header equals the identity of the source endpoint. In this case, the VNI maps to 53596.
You can then track the identity using the Cilium CLI, which indicates that it’s the busybox Pod.
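For example, from one of the Cilium agent pods (the kube-system namespace and the k8s-app=cilium label assume a default install):

CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$CILIUM_POD" -- cilium identity get 53596
# The returned labels should point at the busybox client pod.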
Hopefully, this post gave you a good overview of deploying Cilium and Egress Gateway in AKS (Azure Kubernetes Service) using BYOCNI as the network plugin. If you have any feedback on the solution, please share it with us. Talk to us, and let’s see how Isovalent can help with your use case.
Amit Gupta is a senior technical marketing engineer at Isovalent, powering eBPF cloud-native networking and security. Amit has 21+ years of experience in Networking, Telecommunications, Cloud, Security, and Open-Source. He has previously worked with Motorola, Juniper, Avi Networks (acquired by VMware), and Prosimo. He is keen to learn and try out new technologies that aid in solving day-to-day problems for operators and customers.
He has worked in the Indian start-up ecosystem for a long time and helps new folks in that area outside of work. Amit is an avid runner and cyclist and also spends considerable time helping kids in orphanages.