Last updated: April 2023
Gateway API support has landed in Cilium! Gateway API is the long-term replacement for Kubernetes Ingress and provides operators with a role-oriented, portable, and extensible model to route traffic into their clusters.
In this blog post, we will walk you through how to install, configure and manage the Cilium Gateway API for a number of use cases. If you would like to understand the “what” and the “why” of the Cilium Gateway API, head over to that post.
This post will focus on the “how”.
If you would rather do than read, head to the Cilium Gateway API lab and Cilium Advanced Gateway API Use Cases lab instead.
Installing Cilium with Gateway API
In our environment, we start with a kind-based Kubernetes cluster. You can use a configuration as simple as the one below (note the disableDefaultCNI: true setting, as we will be installing Cilium instead of the default CNI):
Save this configuration as kind-config.yaml and deploy it (for example, with kind create cluster --image kindest/node:v1.24.0 --config kind-config.yaml).
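A minimal kind configuration could look like this (the two-worker topology is just an example; the essential part is disabling the default CNI):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
networking:
  # Cilium will be installed as the CNI instead
  disableDefaultCNI: true
```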
Before we install Cilium with Gateway API, we need to install the Gateway API CRDs. Gateway API is a Custom Resource Definition (CRD) based API, so the CRDs must be installed on the cluster before the Cilium install:
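As a sketch, the standard-channel CRDs can be applied straight from the Gateway API repository (the version pinned below, v0.5.1, is the one Cilium 1.13 was validated against; check the Cilium documentation for the version matching your release):

```shell
# Install the Gateway API standard-channel CRDs (version per your Cilium release)
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.5.1/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.5.1/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.5.1/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
```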
You can check the CRDs have been installed with the following command:
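For example:

```shell
# List the Gateway API CRDs now present on the cluster
kubectl get crd | grep gateway
```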
We can now go ahead and deploy Cilium. We've talked about the many ways to deploy Cilium in a previous post. This time, let's use Helm. Notice the main requirement for Gateway API to work: Cilium must be configured with kubeProxyReplacement set to partial or strict.
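A sketch of the Helm install (the chart version below is illustrative; the key values are kubeProxyReplacement and gatewayAPI.enabled):

```shell
helm repo add cilium https://helm.cilium.io/
# Install Cilium with kube-proxy replacement and Gateway API support enabled
helm upgrade --install cilium cilium/cilium --version 1.13.2 \
  --namespace kube-system \
  --set kubeProxyReplacement=strict \
  --set gatewayAPI.enabled=true
```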
Let’s double check it was successfully set up:
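For example, with the Cilium CLI:

```shell
# Wait for Cilium to report healthy status
cilium status --wait
# Confirm the relevant settings took effect
cilium config view | grep kube-proxy-replacement
```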
To access the service that will be exposed via the Gateway API, we need to allocate an external IP address. When a Gateway is created, an associated Kubernetes Service of type LoadBalancer is created. When using a managed Kubernetes service like EKS, AKS, or GKE, the LoadBalancer is assigned an IP (or DNS name) automatically. For private clouds or home labs, we need another tool – such as MetalLB below – to allocate an IP address and to provide L2 connectivity (that is, advertising said IP to the network with gratuitous ARPs). Note that Cilium itself provides Load-Balancer IP Address Management support but not yet Layer 2 connectivity (that is a work in progress).
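As an illustration, with a recent CRD-based MetalLB deployment, an address pool and an L2 advertisement could look like this (the address range is an assumption based on kind's default Docker network; adjust it to your environment):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  # Range of IPs MetalLB may hand out to LoadBalancer Services
  addresses:
    - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
```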
What is a GatewayClass and a Gateway?
Before we actually start routing traffic into the cluster, we should explain what these CRDs are and why they are required.
If the CRDs have been deployed beforehand, a GatewayClass will be deployed by Cilium during its installation (assuming the Gateway API option has been selected).
Let’s verify that a GatewayClass has been deployed and accepted:
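For example:

```shell
# You should see a GatewayClass named "cilium" with ACCEPTED set to True
kubectl get gatewayclass
```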
The GatewayClass is a type of Gateway that can be deployed: in other words, it is a template. This lets infrastructure providers offer different types of Gateways, and users can then choose the Gateway they like. For instance, an infrastructure provider may create two GatewayClasses named internet and private for two different purposes and possibly with different features: one to proxy Internet-facing services and one for private internal applications.
In our case, we will use the Cilium GatewayClass (io.cilium/gateway-controller).
HTTP Routing
Let's now deploy an application and set up Gateway API HTTPRoutes to route HTTP traffic into the cluster. We will use bookinfo as a sample application.
This demo set of microservices provided by the Istio project consists of several deployments and services:
- 🔍 details
- ⭐ ratings
- ✍ reviews
- 📕 productpage
We will use several of these services as bases for our Gateway APIs.
Deploy an application
Let’s deploy the sample application in the cluster.
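One way to deploy it is straight from the Istio repository (the release branch below is illustrative; any recent branch carries the same sample):

```shell
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
```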
Check that the application is properly deployed:
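For example:

```shell
kubectl get pods
```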
You should see multiple pods being deployed in the default namespace.
Notice that with Cilium Service Mesh, there is no Envoy sidecar created alongside each of the demo app microservices. With a sidecar implementation, the output would show 2/2 READY: one container for the microservice and one for the Envoy sidecar.
Have a quick look at the Services deployed:
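For example:

```shell
kubectl get services
```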
Note that these Services are only internal-facing (ClusterIP) and therefore there is no access from outside the cluster to these Services.
Deploy the Gateway and the HTTPRoutes
Before deploying the Gateway and HTTPRoutes, let’s review the configuration we’re going to use. Let’s review it section by section:
First, note in the Gateway section that the gatewayClassName field uses the value cilium. This refers to the Cilium GatewayClass previously configured.
The Gateway will listen on port 80 for HTTP traffic coming southbound into the cluster.
The allowedRoutes field specifies the namespaces from which Routes may be attached to this Gateway. Same means only Routes in the same namespace may be used by this Gateway. If we were to use All instead of Same, this Gateway could be associated with Routes in any namespace, which would let us use a single Gateway across multiple namespaces that may be managed by different teams.
We could specify different namespaces in the HTTPRoutes – for example, you could send traffic to https://acme.com/payments in a namespace where a payment app is deployed and to https://acme.com/ads in a namespace used by the ads team for their application.
Let's now review the HTTPRoute manifest. HTTPRoute is a Gateway API type for specifying the routing behaviour of HTTP requests from a Gateway listener to a Kubernetes Service.
It is made of Rules that direct traffic based on your requirements.
The first Rule is essentially a simple L7 proxy route: for HTTP traffic with a path starting with /details, forward the traffic to the details Service over port 9080.
The second Rule is similar, but it leverages different matching criteria.
If the HTTP request has:
- an HTTP header named magic with the value foo, AND
- the HTTP method GET, AND
- an HTTP query parameter named great with the value example,

then the traffic will be sent to the productpage Service over port 9080.
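A sketch of an HTTPRoute implementing these two Rules (the route name is illustrative, and the parentRef assumes a Gateway named my-gateway):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-app-1
spec:
  parentRefs:
    - name: my-gateway
  rules:
    # Rule 1: simple path-prefix routing to the details Service
    - matches:
        - path:
            type: PathPrefix
            value: /details
      backendRefs:
        - name: details
          port: 9080
    # Rule 2: header + method + query parameter matching
    - matches:
        - headers:
            - type: Exact
              name: magic
              value: foo
          method: GET
          queryParams:
            - type: Exact
              name: great
              value: example
      backendRefs:
        - name: productpage
          port: 9080
```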
As you can see, you can deploy sophisticated L7 traffic rules in a consistent manner. With the Ingress API, annotations were often required to achieve such routing goals, and that created inconsistencies from one Ingress controller to another.
One of the benefits of these new APIs is that the Gateway API is essentially split into separate functions – one to describe the Gateway and one for the Routes to the back-end services. By splitting these two functions, it gives operators the ability to change and swap gateways but keep the same routing configuration.
In other words: if you decide you want to use a different Gateway API controller instead, you will be able to re-use the same manifest.
Let’s now deploy the Gateway and the HTTPRoute:
Let’s now look at the Service created by the Gateway:
You will see a LoadBalancer Service named cilium-gateway-my-gateway which was created for the Gateway API. MetalLB will automatically provision an IP address for it.
The same external IP address is also associated to the Gateway:
Let’s retrieve this IP address:
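For example (assuming the Gateway is named my-gateway):

```shell
# Store the Gateway's external address in a shell variable for later use
GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
```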
HTTP Path Matching
Let’s now check that traffic based on the URL path is proxied by the Gateway API.
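For example (assuming the Gateway's external IP is stored in a $GATEWAY variable; jq is optional, just for pretty-printing the JSON reply):

```shell
curl --fail -s http://$GATEWAY/details/1 | jq
```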
Because the path starts with /details, this traffic will match the first rule and will be proxied to the details Service over port 9080. The curl request is successful:
If you enable Hubble (either during the Cilium installation or later on with cilium hubble enable), you can track the flows of this particular HTTP transaction. Note how you can filter flows based on the HTTP path.
HTTP Header Matching
This time, we will route traffic based on HTTP parameters like header values, method and query parameters.
With curl, we can specify the header values and query parameters to match the second rule described above:
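For example (assuming the Gateway's external IP is in $GATEWAY):

```shell
# Header, GET method, and query parameter all match the second rule
curl -v -H 'magic: foo' "http://$GATEWAY/?great=example"
```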
The curl query is successful: it returns a 200 code and a verbose HTML reply.
Likewise, Hubble lets you visualize and filter traffic based on HTTP values, such as the method (GET, POST, etc.) or the status code (200, 404, etc.):
TLS Termination
While routing HTTP traffic is easy to understand, secure workloads require the use of HTTPS and TLS certificates. Let's start this walkthrough by deploying a certificate.
For demonstration purposes, we will use a TLS certificate signed by a made-up, self-signed certificate authority (CA). One easy way to do this is with mkcert.
First, let's create a certificate that will validate bookinfo.cilium.rocks and hipstershop.cilium.rocks, as these are the host names used in this Gateway example:
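A wildcard certificate covers both host names:

```shell
mkcert '*.cilium.rocks'
```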
Mkcert created a key (_wildcard.cilium.rocks-key.pem) and a certificate (_wildcard.cilium.rocks.pem) that we will use for the Gateway service.
Let’s now create a Kubernetes TLS secret with this key and certificate:
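For example:

```shell
# Create the TLS Secret referenced later by the Gateway listener
kubectl create secret tls demo-cert \
  --key=_wildcard.cilium.rocks-key.pem \
  --cert=_wildcard.cilium.rocks.pem
```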
We can now deploy another Gateway for HTTPS Traffic:
Let’s review the configuration. It is almost identical to the one we reviewed previously. Just notice the following in the Gateway manifest:
And the following in the HTTPRoute manifest:
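Sketches of the relevant sections (listener name is our choice): in the Gateway, an HTTPS listener per host name references the TLS Secret created earlier; in the HTTPRoute, a hostnames field restricts which host names the route serves.

```yaml
listeners:
  - name: https-bookinfo
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.cilium.rocks"
    tls:
      certificateRefs:
        - kind: Secret
          name: demo-cert
```

```yaml
hostnames:
  - "bookinfo.cilium.rocks"
```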
The HTTPS Gateway API example builds on what was done in the HTTP example and adds TLS termination for two HTTP routes:
- the /details prefix will be routed to the details HTTP service deployed earlier
- the / prefix will be routed to the productpage HTTP service deployed earlier
These services will be secured via TLS and accessible on two domain names:
- bookinfo.cilium.rocks
- hipstershop.cilium.rocks

In our example, the Gateway serves the TLS certificate defined in the demo-cert Secret resource for all requests to bookinfo.cilium.rocks and to hipstershop.cilium.rocks.
After the deployment, the Gateway will pick up an IP address from MetalLB:
In this Gateway configuration, the host names hipstershop.cilium.rocks and bookinfo.cilium.rocks are specified in the routing rules.
Since we do not have DNS entries for these names, we will modify the /etc/hosts
file on the host to manually associate these names to the known Gateway IP we retrieved.
Requests to these names will now be directed to the Gateway.
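For example (assuming the Gateway IP is in $GATEWAY):

```shell
sudo sh -c "echo '$GATEWAY bookinfo.cilium.rocks hipstershop.cilium.rocks' >> /etc/hosts"
```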
Let’s install the Mkcert CA into your system so cURL can trust it:
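```shell
# Add the mkcert root CA to the system trust store
mkcert -install
```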
Let’s now make a HTTPS request to the Gateway:
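For example (jq is optional, just for readable output):

```shell
curl -s https://bookinfo.cilium.rocks/details/1 | jq
```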
The data was properly retrieved over HTTPS (and thus the TLS handshake completed successfully).
Traffic Splitting
For this particular use case, we’re going to use Gateway API to load-balance incoming traffic to different backends, with different weights associated.
We will use a deployment made of echo servers – they will reply to our curl queries with the pod name and the node name.
Let’s now deploy the Gateway and the HTTPRoute:
The HTTPRoute Rules include two different backendRefs, each with a weight associated with it:
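A sketch of such an HTTPRoute (the /echo path, the service names echo-1 and echo-2, and the ports are assumptions based on a typical echo deployment; adjust them to yours):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: load-balancing-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      # Traffic is split 50/50 between the two backends
      backendRefs:
        - name: echo-1
          port: 8080
          weight: 50
        - name: echo-2
          port: 8090
          weight: 50
```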
Access is successful and as described above, we get, in the reply, the pod name and the node name:
When repeating the curl, the traffic is split roughly between two services. To verify, we can run a loop and count how the replies are spread:
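A sketch of such a loop (assuming the Gateway IP is in $GATEWAY and the reply body contains the pod name, which starts with echo-1 or echo-2):

```shell
for _ in {1..500}; do
  curl -s http://$GATEWAY/echo | grep -o 'echo-[12]' | head -1
done | sort | uniq -c
```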
Update the HTTPRoute weights (either by updating the value in the original manifest before reapplying it, or by using kubectl edit httproute) to, for example, 99 for echo-1 and 1 for echo-2:
Running the same loop validates that traffic is split based on the new weights:
HTTP Request Header Modification
With this functionality, the Cilium Gateway API lets us add, remove or edit HTTP Headers of incoming traffic.
This is best validated by trying without and with the functionality. We’ll use the same echo servers.
Let's add another HTTPRoute (we can use the Gateway created in any of the previous HTTP-related use cases):
A curl request is successful, and the reply sent back from the echo server shows the original HTTP headers:
To add a header, we will use the filters field. Add the following to the HTTPRoute rules spec (at the same indentation level as matches and backendRefs) and re-apply it.
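For example (the header name and value are illustrative):

```yaml
filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      # Append this header to every request matching the rule
      add:
        - name: my-cilium-header-name
          value: my-cilium-header-value
```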
When running the curl test again, notice how the header has been added:
To remove a header, you can add the following fields:
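For example, to strip the x-request-id header from incoming requests:

```yaml
filters:
  - type: RequestHeaderModifier
    requestHeaderModifier:
      remove:
        - x-request-id
```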
Notice how the x-request-id header has been removed:
HTTP Response Header Modification
Just like editing request headers can be useful, the same goes for response headers. For example, it allows teams to add/remove cookies for only a certain backend, which can help in identifying certain users that were redirected to that backend previously.
Another potential use case could be when you have a frontend that needs to know whether it’s talking to a stable or a beta version of the backend server, in order to render different UI or adapt its response parsing accordingly.
At the time of writing, this feature is only included in the "Experimental" channel of Gateway API. Therefore, before using it, we need to deploy the experimental Gateway API CRDs.
Find more information about the experimental features on the Gateway API website.
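As a sketch, the experimental-channel CRDs can be applied from a Gateway API release bundle (the version below is illustrative; pick one whose experimental channel includes ResponseHeaderModifier and that your Cilium release supports):

```shell
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v0.6.2/experimental-install.yaml
```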
Let’s review the HTTPRoute we will be using to modify the response headers:
Notice how, this time, the response headers are modified using the type: ResponseHeaderModifier filter. We are going to add three headers in one go.
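A sketch of such an HTTPRoute (the route name, header names and values, the /echo path, and the echo-1 backend are assumptions carried over from the earlier echo examples):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: response-header-modifier
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      filters:
        # Add three headers to the HTTP response in one go
        - type: ResponseHeaderModifier
          responseHeaderModifier:
            add:
              - name: x-header-add-1
                value: header-add-1
              - name: x-header-add-2
                value: header-add-2
              - name: x-header-add-3
                value: header-add-3
      backendRefs:
        - name: echo-1
          port: 8080
```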
Apply the HTTPRoute manifest above and start making HTTP requests to the Gateway address (we're using the same one as before). Remember that the body of the response includes details about the original request. Note that no specific header was added to the original request.
To show the headers of the response, we can run curl in verbose mode:
In the fields starting with <, you should see the response headers, including those added by the Gateway API:
Again, you can see how simple it is to use Cilium Gateway API to modify HTTP traffic – incoming requests or outgoing responses.
Conclusion
We hope you found this deep dive into Gateway API use cases useful. As more features are added to the Gateway API, we will update this post periodically.
To learn more:
- Try the Gateway API lab and the Advanced Gateway API lab
- Watch the Cilium Gateway API YouTube Playlist
- Read the companion blog post
- Join Cilium Slack to interact with the Cilium community
Thanks for reading.
Prior to joining Isovalent, Nico worked in many different roles—operations and support, design and architecture, and technical pre-sales—at companies such as HashiCorp, VMware, and Cisco.
In his current role, Nico focuses primarily on creating content to make networking a more approachable field and regularly speaks at events like KubeCon, VMworld, and Cisco Live.