
Topology Aware Routing and Service Mesh across Clusters with Cluster Mesh

Raymond de Jong, Field CTO

Introduction

While multi-cloud was once derided as an anti-pattern, it is now embraced by the vast majority of organizations. And while many organizations fall into a multi-cloud strategy accidentally, many deliberately leverage multiple clouds to provide resilience against cloud failures.

But deploying highly available cloud native applications across multiple clusters can quickly become laborious. Challenges operators face include:

  • How can I connect clusters together when clouds leverage vastly different network tools?
  • How can I visualize traffic across multiple clouds? 
  • How do I ensure consistent security policies when standard Kubernetes policies are tied to a cluster?
  • How do I load-balance the traffic across multiple clouds?
  • How do I ensure traffic is encrypted as it crosses the public network between clouds?

At the recent eBPF Day preceding KubeCon Europe 2022, Karsten Nielsen, Senior Systems Engineer at IKEA IT, described setting up a mesh of clusters as “quite a bit of work”. This is quite the understatement.

In this blog post we will re-introduce Cilium Cluster Mesh and explain how it provides a single networking, security, and observability solution for applications spanning multiple clusters.

We will describe the latest features of Cilium Cluster Mesh, such as Topology Aware Routing, and how they can be combined with Cilium Service Mesh to provide highly available applications that span multiple clusters, on-premises or in the cloud.

Overview

Cilium Cluster Mesh provides the ability to extend the networking datapath across multiple clusters, whether in the cloud or on-premises. Cluster Mesh then allows all endpoints in connected clusters to communicate while providing full policy enforcement and observability.

Cluster Mesh provides solutions in multiple areas for multi-cloud environments. Depending on your requirements and use cases, you can adopt one or several of its capabilities. Let’s quickly go through the current capabilities of Cilium Cluster Mesh before we dive into the main topic of the blog.

Service Mesh

Cilium provides an easy-to-manage, sidecar-less Service Mesh solution for your applications. Combined with Cluster Mesh, this enables you to extend your Service Mesh across clusters for improved traffic management, availability, and observability of your applications.

Service Discovery & Load Balancing

High availability for your applications is provided through simple, standard Kubernetes Services configured with annotations to effectively load balance traffic to endpoints across multiple clusters. Cilium automatically discovers endpoints across your clusters and load balances traffic between them.

Identity Aware Security and Observability

Network policy enforcement can be applied to workloads spanning multiple clusters. Policies can be specified as a Kubernetes NetworkPolicy resource or via the extended CiliumNetworkPolicy CRD.

Cilium automatically assigns identities based on pod metadata, which are then used to secure and observe your applications effectively without the need to track IP addresses.

Encryption

Cilium Cluster Mesh supports configuration of transparent encryption for all communication between nodes in the local cluster as well as across cluster boundaries.
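
As a minimal sketch, transparent encryption can be enabled through Cilium’s Helm values. The example below assumes WireGuard as the encryption mode; IPsec is supported as well:

encryption:
  # Encrypt traffic between nodes, including across cluster boundaries
  enabled: true
  # Assumption: WireGuard mode; set to "ipsec" to use IPsec instead
  type: wireguard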

Routing and Overlay Networking

Pod IP routing is achieved across multiple Kubernetes clusters at native performance via tunneling or direct-routing without requiring any gateways or proxies. Additional topology-aware routing and traffic engineering capabilities can be configured across clusters using simple annotations in standard Kubernetes Services. 
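
As a sketch of how the datapath mode is selected, the following Helm values apply to Cilium 1.12-era releases (option names may differ in later versions):

# Overlay networking: encapsulate pod traffic in a tunnel
tunnel: vxlan                  # or "geneve"

# Alternatively, direct routing without an overlay:
# tunnel: disabled
# autoDirectNodeRoutes: true   # install node-to-node routes directly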

Architecture

The control plane of Cluster Mesh is based on etcd and kept as minimal as possible. The Cluster Mesh API Server contains an etcd instance to keep track of the cluster’s state. State from multiple clusters is never mixed.

Cilium agents running in other clusters connect to the Cluster Mesh API Server to watch for changes and replicate the multi-cluster state into their own cluster. Access to the Cluster Mesh API Server is protected using TLS certificates. 

Access from one cluster into another is always read-only. This ensures failure domains remain unchanged. A failure in one cluster never propagates into other clusters.
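
As a minimal sketch, the Cluster Mesh API Server is enabled per cluster through Helm values; clusters are then connected to each other, for example with the cilium clustermesh connect CLI command:

clustermesh:
  # Deploy the Cluster Mesh API Server in this cluster
  useAPIServer: true
  apiserver:
    service:
      # Assumption: a cloud environment with LoadBalancer support;
      # NodePort is an alternative for on-premises clusters
      type: LoadBalancer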

Global Services

The global service discovery of Cilium’s multi-cluster model is built using standard Kubernetes services and designed to be completely transparent to existing Kubernetes application deployments.

Cilium monitors Kubernetes services and endpoints and watches for services with the annotation io.cilium/global-service: "true".

For such services, all services with the same name and namespace across clusters are automatically merged into a global service that is available across clusters.

apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
    io.cilium/global-service: "true"
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base

Any traffic to a ClusterIP of a global service will automatically be load-balanced to endpoints in all clusters based on the standard Kubernetes health-checking logic.

Topology Aware Routing

The Cilium 1.12 release introduced new Cluster Mesh features which provide capabilities for topology aware routing by using simple annotations. While load balancing traffic to endpoints across multiple Kubernetes clusters, it is now possible to configure local or remote service affinity.

One of the main use cases for Cluster Mesh is high availability: operating Kubernetes clusters in multiple regions or availability zones and running replicas of the same services in each cluster. By default, traffic is load balanced to all endpoints across clusters. When all endpoints in a given cluster fail, traffic is forwarded to the remaining endpoints in other clusters.

Local Service Affinity

Using the new Local Service Affinity annotation, it is now possible to load balance traffic to local endpoints only; traffic is sent to endpoints in remote clusters only when no local endpoints are available.

This feature optimizes connectivity for your applications and reduces cross-cluster traffic, improving performance and reducing latency.

Configuration of a Global Service in Cluster Mesh is done using the annotation io.cilium/global-service: "true", as shown earlier.

Local service affinity is configured by adding the annotation io.cilium/service-affinity: local to the Service:

apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: local
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base

Remote Service Affinity

Using the Remote Service Affinity feature, you can create a service that instead prefers load-balancing traffic to remote endpoints in other clusters. 

A common use case is operations: temporarily forwarding traffic to remote clusters so you can update or perform lifecycle operations on application deployments in your local cluster without any unavailability of your application.

The configuration of Remote Service affinity is done by adding the annotation io.cilium/service-affinity: remote to a given Service.

apiVersion: v1
kind: Service
metadata:
  name: rebel-base
  annotations:
    io.cilium/global-service: "true"
    io.cilium/service-affinity: remote
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: rebel-base

When the annotation is set to io.cilium/service-affinity: none, there is no service affinity and traffic is load balanced to all endpoints across clusters.

Combining Service Mesh and Cluster Mesh 

Cilium Service Mesh is a service mesh solution that runs entirely without sidecars while supporting various control plane options. Its goal is to reduce latency, complexity, and CPU and memory overhead in the service mesh layer.

Powered by Cilium Cluster Mesh, your Service Mesh can now span clusters without added complexity.

Using Ingress for Multi-Cluster Load Balancing

Ingress resources can be configured in different clusters to attract traffic into your clusters, while Cluster Mesh provides high availability for your applications across multiple clusters.

Depending on your requirements, you can use local or remote service affinity while forwarding traffic, avoiding downtime during canary rollouts.
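
As a minimal sketch, a Cilium Ingress (available since Cilium 1.12) in front of the rebel-base global service from the earlier examples could look as follows; the Ingress name is hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rebel-base-ingress
spec:
  # Use Cilium's built-in, Envoy-based ingress controller
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # Global service: backends are load balanced across clusters
            name: rebel-base
            port:
              number: 80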

Canary Example 1: 25% / 75% Traffic Split Across Clusters with Failover

In this example, Cilium Service Mesh provides L7 traffic management configured for path-based or percentage-based routing to introduce traffic to a new version of a given application. Combined with Cilium Cluster Mesh, these services can be made highly available across clusters.
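
As a heavily abbreviated, illustrative sketch, percentage-based routing can be expressed with the CiliumEnvoyConfig CRD, which embeds Envoy configuration. The services rebel-base-v1 and rebel-base-v2 are hypothetical names for the two application versions; consult the Cilium Service Mesh documentation for a complete, authoritative example:

apiVersion: cilium.io/v2
kind: CiliumEnvoyConfig
metadata:
  name: rebel-base-canary
spec:
  services:
    # Traffic to this service is redirected to the Envoy listener below
    - name: rebel-base-v1
      namespace: default
  resources:
    - "@type": type.googleapis.com/envoy.config.listener.v3.Listener
      name: rebel-base-canary
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: rebel-base-canary
                rds:
                  route_config_name: canary_route
                http_filters:
                  - name: envoy.filters.http.router
    - "@type": type.googleapis.com/envoy.config.route.v3.RouteConfiguration
      name: canary_route
      virtual_hosts:
        - name: canary_route
          domains: ["*"]
          routes:
            - match:
                prefix: "/"
              route:
                weighted_clusters:
                  clusters:
                    # 75% of requests stay on the current version
                    - name: "default/rebel-base-v1"
                      weight: 75
                    # 25% are shifted to the new version
                    - name: "default/rebel-base-v2"
                      weight: 25

The same pattern with weights of 99 and 1 implements the gradual cross-cluster canary described next.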

Canary Example 2: Redirecting 1% of Traffic to a Different Cluster

With a Cluster Mesh-powered Service Mesh, it is even possible to do canary rollouts to another cluster. When a new version of a given application is available, it can be deployed in a different cluster to avoid resource contention, letting you test the new version and monitor its performance while gradually shifting more traffic to it using Cilium Service Mesh.

By combining topology-aware routing with local or remote service affinity, you can deploy highly available, low-latency applications using both Cilium Service Mesh and Cluster Mesh.

Security & Observability

Using Cilium, each pod deployed in a Kubernetes cluster is assigned an identity based on its set of labels. Pods deployed with the same set of labels, for example through a Deployment or ReplicaSet, are assigned the same identity. With these identities there is no longer any need to track IP addresses: when a pod connects to another pod, the Cilium agent attaches the identity to the IP packets using eBPF, and the identity is verified at the destination pod before the connection is allowed.

Using Cluster Mesh, each cluster must be configured with a unique cluster name and cluster ID. The cluster ID is an integer between 1 and 255, which ensures that each cluster receives a unique, non-overlapping range of identities.
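
As a sketch, this identity is set per cluster through Helm values:

cluster:
  # Must be unique across all clusters in the mesh
  name: cluster-1
  # Integer between 1 and 255; gives this cluster a non-overlapping identity range
  id: 1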

Each cluster learns about other cluster workloads and their associated identities which provides identity-aware security to span across multiple clusters using Cilium Network Policies and observability in each cluster using Hubble.

Multi-Cluster Cilium Network Policies

The unique identities learned across clusters allow us to configure Cilium Network Policies that only allow traffic from specific pods in specific clusters.

For example, x-wing pods on cluster-1 can be allowed to connect to rebel-base pods running in cluster-2, while x-wing pods on cluster-2 are not allowed to connect to rebel-base pods running in cluster-1.

Below is an example of a Cilium Network Policy that can be applied in such configuration.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "ingress-to-rebel-base"
spec:
  description: "Allow x-wing in cluster-1 to contact rebel-base in cluster-2"
  endpointSelector:
    matchLabels:
      name: rebel-base
      io.cilium.k8s.policy.cluster: cluster-2
  ingress:
  - fromEndpoints:
    - matchLabels:
        name: x-wing
        io.cilium.k8s.policy.cluster: cluster-1
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP

Observability

Securing workloads on Kubernetes clusters is very hard without observability tools, and with workloads running across clusters the problem becomes exponentially worse. With Cilium, the required observability is provided by Hubble, which has knowledge of all workloads and their identities and can provide rich context about all network flows between workloads.

In the Hubble UI, we can identify flows based on source and destination identities and see which cluster each endpoint is running in.
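
As a minimal sketch, Hubble Relay and the Hubble UI can be enabled through Helm values:

hubble:
  relay:
    # Aggregate flow data from the Hubble instances on every node
    enabled: true
  ui:
    # Web UI for exploring service maps and flows across clusters
    enabled: true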

Conclusion

The latest Cilium Cluster Mesh features, such as topology aware routing, provide higher availability, greater operational flexibility, and improved performance for applications that span multiple clusters.

Cilium Service Mesh powered by Cilium Cluster Mesh provides even more flexibility for running your applications across multiple clusters, on-premises or in the cloud. The scenarios described in this blog are just a few examples of how you could design your applications with Service Mesh. Depending on your applications and their related services, the combination of Cluster Mesh and Service Mesh features provides flexibility and scalability across your clouds.

The Cilium Identity Aware Security and Observability features give you the ability to easily observe and secure your applications independent of where they are located.

If you would like to know more about Cluster Mesh or want to see it in action, how about you run our interactive lab?


Also, you can test Cluster Mesh yourself by following our getting started guides, and we would love to hear your feedback on the Cilium Slack Channel.

