Cilium is available and supported across a number of Kubernetes platforms and offerings. One of the most popular is that of Red Hat OpenShift. We’ve previously covered why Cilium and Red Hat OpenShift are a match made in heaven, supercharging our clusters’ capabilities out of the box. If you want to learn how to deploy Red Hat OpenShift with Cilium, follow this tutorial!
Why deploy Red Hat OpenShift with Cilium?
By introducing Cilium’s eBPF-powered networking, security, and observability features to our Red Hat OpenShift clusters, application developers and Site Reliability Engineers gain access to granular application metrics and insights into the behavior of their applications, while SecOps teams can transparently encrypt traffic, ensuring a secure communication layer across multiple clusters and cloud environments. These are just a few examples of how bringing these two technologies together provides a robust infrastructure for managing applications within the OpenShift environment.
Cilium has been available in the Red Hat Ecosystem Catalog since 2021, as well as being certified as a Certified OpenShift CNI Plug-in. The Container Network Interface (CNI) badge is a specialization within Red Hat OpenShift certification available to networking products that integrate with OpenShift using a CNI plug-in. Users running OpenShift can feel confident that running Cilium will not negatively impact their Red Hat support experience.
The OpenShift certified version of Cilium is based on Red Hat Universal Base Images and passes the Operator certification requirements as well as the Kubernetes e2e tests.
Who supports me when I use Cilium on OpenShift?
As Cilium has earned the Red Hat OpenShift Container Network Interface (CNI) certification by completing the operator certification and passing end-to-end testing, Red Hat will collaborate as needed with the ecosystem partner to troubleshoot any issues, as per its third-party software support statements.
For customers who have chosen Isovalent Enterprise for Cilium, this ensures a complete support experience with Cilium by both vendors, working towards a common resolution.
How do I install Cilium on OpenShift?
Red Hat OpenShift Container Platform (OCP) offers three main flexible installation methods:
- Installer provisioned infrastructure (IPI) – Deploy a cluster on infrastructure that the installer provisions and the cluster maintains.
- User provisioned infrastructure (UPI) – Deploy a cluster on infrastructure that we prepare and maintain.
- There is a third, Agent-based method, which provides the flexibility of UPI, driven by the Assisted Installer (AI) tool.
In this tutorial, we will be performing an IPI installation deployed to a VMware vSphere platform; however, the steps to consume the Cilium manifests during a cluster deployment also apply to a UPI installation.
I recommend becoming familiar with the Red Hat OpenShift deployment documentation for our chosen platform, as well as the Cilium documentation for installation on Red Hat OpenShift.
We can check the support matrix for Red Hat OpenShift and Cilium versions here.
Prerequisites
Before we get started with the installation itself, we will need to get the following prerequisites in place.
- Jump host to run the installation software from
  - We can download the `openshift-install` and `oc` tools from this Red Hat repository
  - Alternatively, we can go to Red Hat Hybrid Cloud Console > OpenShift > Clusters > Create a cluster > select our platform. This will give us download links to the software and our pull secret
- A pull secret file/key from the Red Hat Cloud Console website
  - We can get one of these by just signing up for an account; any cluster created using this key will get a 60-day trial activation
- Access to the DNS server which supports the infrastructure platform that we are deploying to
- An SSH key used for access to the deployed OpenShift nodes
Extract the software tools and place them in our user location.
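A minimal sketch of what that might look like, assuming the Linux archives downloaded from the Red Hat mirror (archive names can differ slightly between versions) and `/usr/local/bin` as the target location:

```bash
# Extract the installer and the client tools, then place them on the PATH
tar -xzf openshift-install-linux.tar.gz
tar -xzf openshift-client-linux.tar.gz   # contains oc and kubectl
sudo mv openshift-install oc kubectl /usr/local/bin/
```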
Next, we need to download the VMware vCenter trusted root certificates and import them to our Jump Host.
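vCenter serves its trusted root certificate bundle over HTTPS. A sketch of the download, assuming a vCenter reachable at `vcenter.example.com` (replace with your own FQDN):

```bash
# Download the vCenter trusted root certificate bundle to the Jump Host
wget --no-check-certificate https://vcenter.example.com/certs/download.zip
```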
Now unzip the file (we may need to install a software package for this: `sudo apt install unzip`), and import the certificates into the trusted store (Ubuntu uses the .crt files, hence importing the win folder).
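A minimal sketch of that import on Ubuntu, assuming the `download.zip` bundle fetched in the previous step:

```bash
sudo apt install unzip -y
unzip download.zip
# Ubuntu's update-ca-certificates expects .crt files, hence the win folder
sudo cp certs/win/*.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
```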
We will need a user account to connect to vCenter with the correct permissions for the `openshift-install` tool to deploy the vSphere cluster. If we do not want to use an existing account and permissions, we can use this PowerCLI script as an example to create the roles with the correct privileges based on the Red Hat documentation (at the time of writing).
Finally we will need a copy of the Cilium Operator Lifecycle Manager (OLM) manifest files.
- Cilium OSS OLM files hosted by Isovalent
- Cilium Enterprise OLM Files hosted by Isovalent
- Go to the docs page (accessible for customers), navigate to the “Installing Isovalent Cilium Enterprise on OpenShift” pages for the appropriate links.
DNS
For an OpenShift 4.13 IPI vSphere installation, we will need DNS and DHCP available for the cluster.
- For the OpenShift 4.13 IPI installation, we need to configure two static DNS addresses: one for cluster API access, `api.{clustername}.{basedomain}`, and one for cluster ingress access, `*.apps.{clustername}.{basedomain}`.
  - In this tutorial I will be using the following:
    - Base Domain – isovalent.rocks
    - Cluster Name – ocp413
    - Full example – api.ocp413.isovalent.rocks
- We will also need to create reverse lookup records for these addresses
- These two addresses need to be part of the same subnet as our DHCP scope but excluded from DHCP.
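As an illustration, the forward records might look like the following in a BIND-style zone file; the IP addresses are placeholders taken from the DHCP subnet, and matching PTR records would be added to the reverse lookup zone:

```
api.ocp413.isovalent.rocks.     IN  A  192.168.200.10
*.apps.ocp413.isovalent.rocks.  IN  A  192.168.200.11
```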
Create the OpenShift Installation manifests
Now that we have our prerequisites in place, we can start to deploy our cluster. When using the `openshift-install` tool, we have three main command line options when creating a cluster:
openshift-install create cluster
- This will run through a wizard to create the `install-config.yaml` file and then create the cluster automatically using Terraform. Terraform is packaged as part of the installer software, meaning we don’t need Terraform on our system as a prerequisite.
- If we run the two commands listed below, we can still run this command afterwards to provision our cluster using the IPI method.
- If we only use this command to provision a cluster, we will skip the necessary steps to bootstrap the cluster with the Cilium CNI.
openshift-install create install-config
- This will run through a wizard to create the `install-config.yaml` file, and leave it in the root directory, or the directory we specify with the `--dir={location}` argument.
- Modifications can be made to the `install-config.yaml` file before running the above `create cluster` command.
openshift-install create manifests
- This will create the manifests folder which controls the provisioning of the cluster. Most of the time this command is only used with UPI installations. When deploying Cilium, we are required to create this manifest folder and add the additional Cilium YAML files to this folder. This provides OpenShift the additional configuration files to bootstrap the cluster with Cilium.
First we will create the `install-config.yaml` file; the easiest way to do this is by running the below command and answering the questions as part of the wizard:
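A sketch of that command, assuming we keep the cluster assets in a directory named after the cluster (the directory name is our choice):

```bash
openshift-install create install-config --dir=ocp413
```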
This will output a file called `install-config.yaml`, which contains all the infrastructure information that the tool will provision for the associated cloud provider, in this example VMware vSphere.
The file contents will be similar to the example below. Whichever way we create the file, we will need to edit it to ensure that `networkType` is set to `Cilium`, and that the network address CIDRs are configured as necessary for our environment.
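Below is a trimmed sketch of what the edited file might look like for this tutorial’s vSphere environment; the CIDRs, machine network, and redacted values are illustrative:

```yaml
apiVersion: v1
baseDomain: isovalent.rocks
metadata:
  name: ocp413
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
networking:
  networkType: Cilium            # changed from the wizard default
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.200.0/24       # illustrative; should match the DHCP subnet
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere: {}                    # vCenter connection and placement details captured by the wizard are omitted here
pullSecret: '<redacted>'
sshKey: '<redacted>'
```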
Now run the following command to create the OpenShift manifests folder:
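For example, assuming the same asset directory used above:

```bash
openshift-install create manifests --dir=ocp413
```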
Next we need to copy across the Cilium manifests into this folder.
For Cilium OSS, we can run the following commands, which download the repo to `/tmp`, copy the manifests for our configured Cilium version into the manifests folder, and then remove the repo from `/tmp`:
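A sketch of those commands; the repository URL, Cilium version, and asset directory are illustrative, so use the OLM manifest link from the prerequisites and a version from the support matrix:

```bash
# Clone the OLM manifests, copy the files for the chosen Cilium version, then clean up
cilium_version="1.14.2"   # illustrative; pick a supported version
git clone https://github.com/isovalent/olm-for-cilium.git /tmp/cilium-olm
cp /tmp/cilium-olm/manifests/cilium.v${cilium_version}/* ocp413/manifests/
rm -rf /tmp/cilium-olm
```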
For Isovalent Enterprise for Cilium OLM files, the method is the same once we’ve downloaded the files.
OpenShift will by default create a `cluster-network-02-operator.yml` file. Within this file, the `networkType` should be set to `Cilium`, and all relevant `clusterNetwork` and `serviceNetwork` CIDRs should be defined, as per the configuration from the `install-config.yaml` file.
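For reference, a trimmed sketch of the Network object typically generated by the installer in that file, using the same example CIDRs as earlier:

```yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: Cilium
  serviceNetwork:
  - 172.30.0.0/16
```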
To configure Cilium, we can modify the `cluster-network-07-cilium-ciliumconfig.yaml` file. The example below shows the configuration with Hubble metrics and Prometheus ServiceMonitors enabled:
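The manifest shipped with the OLM files already contains OpenShift-specific defaults that should be retained; the sketch below is illustrative and follows Cilium’s Helm value names, showing only the Hubble and Prometheus additions:

```yaml
apiVersion: cilium.io/v1alpha1
kind: CiliumConfig
metadata:
  name: cilium
  namespace: cilium
spec:
  # Helm-style values consumed by the Cilium OLM operator
  hubble:
    enabled: true
    metrics:
      enabled:
      - "dns:query;ignoreAAAA"
      - drop
      - tcp
      - flow
      - icmp
      - http
      serviceMonitor:
        enabled: true
  prometheus:
    enabled: true
    serviceMonitor:
      enabled: true
```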
It is also possible to update the configuration values once the cluster is running by changing the `CiliumConfig` object, e.g. with `kubectl edit ciliumconfig -n cilium cilium`. We may need to restart the Cilium agent pods for certain options to take effect.
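For example, one way to restart the agents after such a change:

```bash
# Roll the Cilium agent DaemonSet so the pods pick up the updated configuration
oc -n cilium rollout restart daemonset/cilium
```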
How do I create the OpenShift Cluster with Cilium?
We are now ready to create the OpenShift Cluster and can proceed by running the command:
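Assuming the same asset directory used throughout this tutorial:

```bash
openshift-install create cluster --dir=ocp413
```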
For more granular output to the terminal, we can use the following command argument:
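For example, the installer’s `--log-level` flag accepts debug, info, warn, and error:

```bash
openshift-install create cluster --dir=ocp413 --log-level=debug
```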
Full debug information is also contained in the hidden file .openshift_install.log
in the location where the command is run from.
Below is an example output from creating the cluster with the informational log level set.
How do I test Network Connectivity with Cilium?
For this guide, the final piece is to run the standard Cilium connectivity tests on our cluster to confirm network connectivity.
We run the `export KUBECONFIG` command provided in the installer’s terminal output to connect to our cluster.
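The kubeconfig is written to the `auth` folder inside the asset directory; assuming the `ocp413` directory used earlier:

```bash
export KUBECONFIG=$(pwd)/ocp413/auth/kubeconfig
oc get nodes   # quick sanity check that we can reach the cluster API
```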
Before the Cilium network tests will run, we need to apply a configuration to account for the Security Context Constraints (SCC) that OpenShift implements out of the box as part of its hardened security posture.
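The exact configuration may differ from the original article; one possible approach is to grant the privileged SCC to the service accounts of the test namespace (assumed here to be `cilium-test`, created in the next step):

```bash
# Allow the connectivity-check workloads to run despite the default SCCs;
# the cilium-test namespace itself is created in the following step.
oc adm policy add-scc-to-group privileged system:serviceaccounts:cilium-test
```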
Now we can proceed to configure the tests. First, create a namespace for the tests to run in:
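For example, using the namespace name expected by the upstream connectivity check:

```bash
oc new-project cilium-test
```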
Deploy the checks with the command:
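A sketch of the command, using the upstream Cilium connectivity-check manifest; the branch in the URL is illustrative and should match the installed Cilium version:

```bash
oc apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/v1.14/examples/kubernetes/connectivity-check/connectivity-check.yaml
```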
This will configure a series of deployments that will use various connectivity paths to connect to each other. Connectivity paths include with and without service load-balancing and various network policy combinations. The pod name indicates the connectivity variant and the readiness and liveness gate indicates success or failure of the test:
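We can then list the pods in the test namespace and wait until they all report Ready, for example:

```bash
oc get pods -n cilium-test
```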
Note: If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending
state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
Summary
This tutorial takes you as far as the initial deployment of Red Hat OpenShift and Cilium including network connectivity testing. To continue to run this cluster further for development or production use-cases, I recommend continuing to follow the official Red Hat OpenShift documentation covering post installation cluster tasks. You can also join the Cilium Slack workspace to join in the discussion for all things Cilium related.
As you can see, it’s quick and easy to install Cilium into our Red Hat OpenShift environment, with minimal additional configuration from our existing workflows.
With Cilium installed in our cluster, we can now take advantage of features and use-cases such as:
- Advanced Network Policy
- Cilium Cluster Mesh
- BGP Support
- Gateway API
- Advanced Network Protocol Visibility
- Transparent Encryption
Why not jump into our hands-on labs, which take you through the various features of Cilium, Hubble, and Tetragon? We recommend getting started by diving into our three popular Isovalent Enterprise for Cilium labs covering Zero Trust Networking, Security Observability, and Connectivity Visibility.
Isovalent Enterprise for Cilium: Zero Trust Visibility
Creating the right Network Policies can be difficult. In this lab, you will use Hubble metrics to build a Network Policy Verdict dashboard in Grafana showing which flows need to be allowed in your policy approach.
Isovalent Enterprise for Cilium: Security Visibility
Learn how to simulate the exploitation of a Node.js application, with a reverse shell inside of a container, and monitor the lateral movement within the Kubernetes environment.
Isovalent Enterprise for Cilium: Connectivity Visibility with Hubble
Learn how Hubble Flow events provide metadata-aware, DNS-aware, and API-aware visibility for network connectivity within a Kubernetes environment using Hubble CLI, Hubble UI and Hubble Timescape, which provides historical data for troubleshooting.
Dean Lewis is a Senior Technical Marketing Engineer at Isovalent – the company behind the open-source cloud native solution Cilium.
Dean has a varied background in technology, from support to operations to architectural design and delivery at IT solutions providers based in the UK, before moving to VMware and focusing on cloud management and cloud native, which remains his primary focus. You can find Dean in the past and present speaking at various Technology User Groups and Industry Conferences, as well as on his personal blog.