
Tutorial: Deploying Red Hat OpenShift with Cilium

Dean Lewis

Cilium is available and supported across a number of Kubernetes platforms and offerings, and one of the most popular is Red Hat OpenShift. We’ve previously covered why Cilium and Red Hat OpenShift are a match made in heaven, supercharging our clusters’ capabilities out of the box. If you want to learn how to deploy Red Hat OpenShift with Cilium, follow this tutorial!

Why deploy Red Hat OpenShift with Cilium?

By introducing Cilium’s eBPF-powered networking, security, and observability features to our Red Hat OpenShift clusters, application developers and Site Reliability Engineers gain granular application metrics and insight into the behavior of their applications, while SecOps teams can transparently encrypt traffic, ensuring a secure communication layer across multiple clusters and cloud environments. These are just a few examples of how bringing these two technologies together provides a robust infrastructure for managing applications within the OpenShift environment.

Red Hat OpenShift - Cilium Features

Cilium has been available in the Red Hat Ecosystem Catalog since 2021, as well as being certified as a Certified OpenShift CNI Plug-in. The Container Network Interface (CNI) badge is a specialization within Red Hat OpenShift certification available to networking products that integrate with OpenShift using a CNI plug-in. Users running OpenShift can feel confident that running Cilium will not negatively impact their Red Hat support experience.

The OpenShift certified version of Cilium is based on Red Hat Universal Base Images and passes the Operator certification requirements as well as the Kubernetes e2e tests.

Who supports me when I use Cilium on OpenShift?

As Cilium has earned the Red Hat OpenShift Container Network Interface (CNI) certification by completing the operator certification and passing end-to-end testing, Red Hat will collaborate as needed with the ecosystem partner to troubleshoot any issues, as per their third-party software support statements.

For customers who have chosen Isovalent Enterprise for Cilium, this ensures a complete support experience with Cilium by both vendors, working towards a common resolution.

How do I install Cilium on OpenShift?

Red Hat OpenShift Container Platform (OCP) offers three main flexible installation methods:

  • Installer provisioned infrastructure (IPI) – Deploy a cluster on infrastructure that the installer provisions and the cluster then maintains.
  • User provisioned infrastructure (UPI) – Deploy a cluster on infrastructure that we prepare and maintain.
  • Agent-based installer – A third method that provides the flexibility of UPI, driven by the Assisted Installer (AI) tool.

In this tutorial, we will perform an IPI installation onto a VMware vSphere platform; however, the steps to consume the Cilium manifests during cluster deployment also apply to a UPI installation.

I recommend becoming familiar with the Red Hat OpenShift deployment documentation for our chosen platform, as well as the Cilium documentation for installation on Red Hat OpenShift.

We can check the support matrix for Red Hat OpenShift and Cilium versions here.

Prerequisites

Before we get started with the installation itself, we will need to get the following prerequisites in place.

  • A jump host to run the installation software from
    • We can download the openshift-install tool and oc tool from this Red Hat repository
    • Alternatively, we can go to the Red Hat Hybrid Cloud Console > OpenShift > Clusters > Create a cluster and select our platform. This provides download links for the software and our pull secret
  • A pull secret file/key from the Red Hat Cloud Console website
    • We can get one by signing up for an account; any cluster created using this key receives a 60-day trial activation
  • Access to the DNS server that supports the infrastructure platform we are deploying to
  • An SSH key used for access to the deployed OpenShift nodes

Extract the software tools and copy the binaries to a location on our PATH.

tar -zxvf openshift-client-linux-{version}.tar.gz
tar -zxvf openshift-install-linux-{version}.tar.gz

sudo cp openshift-install /usr/local/bin/openshift-install
sudo cp oc /usr/local/bin/oc
sudo cp kubectl /usr/local/bin/kubectl
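
A quick sanity check that the tools are on the PATH and report the expected versions (the exact output depends on the release downloaded):

openshift-install version
oc version --client
kubectl version --client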

Next, we need to download the VMware vCenter trusted root certificates and import them to our Jump Host.

curl -O https://{vCenter_FQDN}/certs/download.zip

Now unzip the file (we may need to install the unzip package first with sudo apt install unzip) and import the certificates into the trusted store (Ubuntu uses the .crt files, hence we import the win folder).

unzip download.zip
sudo cp certs/win/* /usr/local/share/ca-certificates
sudo update-ca-certificates
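
To confirm the certificates are now trusted, a simple check (reusing the {vCenter_FQDN} placeholder from above) is to request the vCenter endpoint and verify curl no longer reports a certificate error:

curl -sS -o /dev/null https://{vCenter_FQDN}/ && echo "vCenter certificate trusted"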

We will need a user account to connect to vCenter with the correct permissions for the OpenShift-Install tool to deploy the vSphere cluster. If we do not want to use an existing account and permissions, we can use this PowerCLI script as an example to create the roles with the correct privileges based on the Red Hat documentation (at the time of writing).

Finally we will need a copy of the Cilium Operator Lifecycle Manager (OLM) manifest files.

  • Cilium OSS OLM files hosted by Isovalent
  • Cilium Enterprise OLM Files hosted by Isovalent
    • Go to the docs page (accessible for customers), navigate to the “Installing Isovalent Cilium Enterprise on OpenShift” pages for the appropriate links.

DNS

For an OpenShift 4.13 IPI vSphere installation, we will need DNS and DHCP available for the cluster.

  • For the OpenShift 4.13 IPI installation, we need to configure two static DNS addresses: one for cluster API access api.{clustername}.{basedomain} and one for cluster ingress access *.apps.{clustername}.{basedomain}.
    • In this tutorial I will be using the following:
      • Base Domain – isovalent.rocks
      • Cluster Name – ocp413
      • Full example – api.ocp413.isovalent.rocks
  • We will also need to create reverse lookup records for these addresses
  • These two addresses need to be part of the same subnet as our DHCP scope but excluded from DHCP (a quick verification sketch follows this list)
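
One way to verify the records before starting the installation, assuming the example names and the VIP addresses used later in this tutorial, is with dig:

# Forward lookups for the API record and a sample hostname under the wildcard Ingress record
dig +short api.ocp413.isovalent.rocks
dig +short console-openshift-console.apps.ocp413.isovalent.rocks

# Reverse (PTR) lookups for the two static addresses
dig +short -x 192.168.200.142
dig +short -x 192.168.200.143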

Create the OpenShift Installation manifests

Now that we have our pre-reqs in place, we can start to deploy our cluster. When using the OpenShift-Install tool, we have three main command line options when creating a cluster:

openshift-install create cluster

  • This will run through a wizard to create the install-config.yaml file and then create the cluster automatically using Terraform. Terraform is packaged as part of the installer software, meaning we don’t need Terraform on our system as a prerequisite.
  • If we run the two commands listed below first, we can still run this command afterwards to provision our cluster using the IPI method.
  • If we only use this command to provision a cluster, we will skip the steps necessary to bootstrap the cluster with the Cilium CNI.

openshift-install create install-config

  • This will run through a wizard to create the install-config.yaml file and leave it in the root directory, or in the directory we specify with the --dir={location} argument.
  • Modifications can be made to the install-config.yaml file before running the create cluster command above.

openshift-install create manifests

  • This will create the manifests folder which controls the provisioning of the cluster. Most of the time this command is only used with UPI installations. When deploying Cilium, we are required to create this manifests folder and add the additional Cilium YAML files to it, which provides OpenShift with the additional configuration files needed to bootstrap the cluster with Cilium. The overall command flow we follow is sketched below.
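
Putting the three commands together, the sequence used in this tutorial looks like the following sketch (the --dir argument is optional; without it the installer works in the current directory):

openshift-install create install-config --dir=ocp413   # answer the wizard, then edit the generated install-config.yaml
openshift-install create manifests --dir=ocp413        # generates the manifests folder; copy the Cilium OLM files into it
openshift-install create cluster --dir=ocp413          # consumes the manifests and provisions the cluster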

First we will create the install-config.yaml file. The easiest way to do this is to run the command below and answer the questions in the wizard:

$ openshift-install create install-config
? SSH Public Key /home/dean/.ssh/ocp413.pub
? Platform vsphere
? vCenter vcenter.isovalent.rocks
? Username administrator@vsphere.local
? Password [? for help] *********
INFO Connecting to vCenter vcenter.isovalent.rocks
INFO Defaulting to only available datacenter: Datacenter 
INFO Defaulting to only available cluster: Cluster 
? Default Datastore Datastore
INFO Defaulting to only available network: Network 1
? Virtual IP Address for API 192.168.200.142
? Virtual IP Address for Ingress 192.168.200.143
? Base Domain isovalent.rocks
? Cluster Name ocp413
? Pull Secret [? for help] ***************************************************************************************************
INFO Install-Config created in .

This will output a file called install-config.yaml, which contains all the infrastructure information that the tool will provision on the associated cloud provider, in this example VMware vSphere.

The file contents will be similar to the below.

Whichever way we create the file, we will need to edit this file to ensure that networkType is set to Cilium, and the Network Address CIDRs are configured as necessary for our environment.

additionalTrustBundlePolicy: Proxyonly
apiVersion: v1
baseDomain: isovalent.rocks
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: ocp413
networking:
  clusterNetwork:
  - cidr: 10.244.0.0/16
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: Cilium
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    apiVIPs:
    - 192.168.200.142
    failureDomains:
    - name: generated-failure-domain
      region: generated-region
      server: vcenter.isovalent.rocks
      topology:
        computeCluster: /Datacenter/host/Cluster
        datacenter: Datacenter
        datastore: /Datacenter/datastore/vSANDatastore
        networks:
        - Network1
        resourcePool: /Datacenter/host/Cluster1/Resources/Compute-ResourcePool/openshift/
      zone: generated-zone
    ingressVIPs:
    - 192.168.200.143
    vcenters:
    - datacenters:
      - Datacenter
      password: Isovalent.Rocks1!
      port: 443
      server: vcenter.isovalent.rocks
      user: administrator@vsphere.local
publish: External
pullSecret: '{"auths":{"cloud.openshift.com":.......}
sshKey: |
  ssh-rsa ...... isovalent@rocks
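
If we prefer to script the edit, a minimal sketch (assuming install-config.yaml sits in the current working directory; the backup is useful because the subsequent create manifests step consumes the file):

cp install-config.yaml install-config.yaml.bak
sed -i 's/networkType: .*/networkType: Cilium/' install-config.yaml
grep networkType install-config.yaml   # should now print: networkType: Cilium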

Now run the following command to create the OpenShift manifests file:

$ openshift-install create manifests

Next we need to copy across the Cilium manifests into this folder.

For Cilium OSS, we can run the following commands, which clone the repo to /tmp, copy the manifests for our chosen Cilium version into the manifests folder, and then remove the repo from /tmp:

$ cilium_version="1.14.3"
$ git_dir="/tmp/cilium-olm"
$ git clone https://github.com/isovalent/olm-for-cilium.git ${git_dir}
$ cp ${git_dir}/manifests/cilium.v${cilium_version}/* "manifests/"
$ test -d ${git_dir} && rm -rf -- ${git_dir}

# Test the Cilium files have populated the manifests folder
$ ls manifests/cluster-network-*-cilium-*

For Isovalent Enterprise for Cilium OLM files, the method is the same once we’ve downloaded the files.

OpenShift will by default create a cluster-network-02-config.yml file. Within this file, the networkType should be set to Cilium and all relevant clusterNetwork and serviceNetwork CIDRs should be defined, as per the configuration from the install-config.yaml file.
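
A quick check that the generated network configuration picked up our settings (searching the whole manifests folder avoids depending on exact file names):

# Confirm the CNI type and network CIDRs were carried over from install-config.yaml
grep -R 'networkType' manifests/
grep -R -A2 'serviceNetwork' manifests/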

To configure Cilium, we can modify the cluster-network-07-cilium-ciliumconfig.yaml file. The below example shows the configuration for Hubble Metrics and Prometheus Service Monitors enabled:

apiVersion: cilium.io/v1alpha1
kind: CiliumConfig
metadata:
  name: cilium
  namespace: cilium
spec:
  sessionAffinity: true
  securityContext:
    privileged: true
  kubeProxyReplacement: strict
  k8sServiceHost: api.ocp413.isovalent.rocks
  k8sServicePort: 6443
  ipam:
    mode: "cluster-pool"
    operator:
      clusterPoolIPv4PodCIDRList: "10.244.0.0/16"
      clusterPoolIPv4MaskSize: 24
  cni:
    binPath: "/var/lib/cni/bin"
    confPath: "/var/run/multus/cni/net.d"
    exclusive: false
    customConf: false
  prometheus:
    enabled: true
    serviceMonitor: {enabled: true}
  nodeinit:
    enabled: true
  extraConfig:
    bpf-lb-sock-hostns-only: "true"
    export-aggregation: "connection"
    export-aggregation-ignore-source-port: "false"
    export-aggregation-state-filter: "new closed established error"
  hubble:
    enabled: true
    metrics:
      enabled:
      - dns:labelsContext=source_namespace,destination_namespace
      - drop:labelsContext=source_namespace,destination_namespace
      - tcp:labelsContext=source_namespace,destination_namespace
      - icmp:labelsContext=source_namespace,destination_namespace
      - port-distribution
      - flow:labelsContext=source_namespace,destination_namespace;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity
      - "kafka:labelsContext=source_namespace,source_workload,destination_namespace,destination_workload,traffic_direction;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity"
      - "httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction;sourceContext=workload-name|reserved-identity;destinationContext=workload-name|reserved-identity"
      serviceMonitor: {enabled: true}
    relay: {enabled: true}
  operator:
    unmanagedPodWatcher:
      restart: false
    metrics:
      enabled: true
    prometheus:
      enabled: true
      serviceMonitor: {enabled: true}

It is also possible to update the configuration values once the cluster is running by changing the CiliumConfig object, e.g. with kubectl edit ciliumconfig -n cilium cilium. We may need to restart the Cilium agent pods for certain options to take effect.
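
For example, after editing the CiliumConfig, a rolling restart of the agents can be triggered as follows (a sketch assuming the default cilium namespace and DaemonSet name used by the OLM install):

kubectl edit ciliumconfig -n cilium cilium          # apply the configuration change
kubectl -n cilium rollout restart daemonset/cilium  # restart the Cilium agent pods
kubectl -n cilium rollout status daemonset/cilium   # wait for the rollout to complete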

How do I create the OpenShift Cluster with Cilium?

We are now ready to create the OpenShift Cluster and can proceed by running the command:

$ openshift-install create cluster

For more granular output to the terminal, we can use the following command argument:

--log-level string   log level (e.g. "debug | info | warn | error") (default "info")

Full debug information is also contained in the hidden file .openshift_install.log in the location where the command is run from.
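
For example, to run the installation with full debug output while following the hidden log file from a second terminal in the same directory:

openshift-install create cluster --log-level debug

# In another terminal:
tail -f .openshift_install.log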

Below is an example output from creating the cluster with the informational log level set.

INFO Consuming Openshift Manifests from target directory 
INFO Consuming Worker Machines from target directory 
INFO Consuming Master Machines from target directory 
INFO Consuming Common Manifests from target directory 
INFO Consuming OpenShift Install (Manifests) from target directory 
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/prod/streams/4.13-9.2/builds/413.92.202307260246-0/x86_64/rhcos-413.92.202307260246-0-vmware.x86_64.ova?sha256=4b2caacc4d5dc69aabe3733a86e0a5ac0b41bbe1c090034c4fa33faf582a0476' 
INFO The file was found in cache: /home/dean/.cache/openshift-installer/image_cache/rhcos-413.92.202307260246-0-vmware.x86_64.ova. Reusing... 
INFO Creating infrastructure resources...         
INFO Waiting up to 20m0s (until 4:07PM) for the Kubernetes API at https://api.ocp413.isovalent.rocks:6443... 
INFO API v1.26.9+636f2be up                       
INFO Waiting up to 1h0m0s (until 4:49PM) for bootstrapping to complete... 
INFO Destroying the bootstrap resources...        
INFO Waiting up to 40m0s (until 4:43PM) for the cluster at https://api.ocp413.isovalent.rocks:6443 to initialize... 
INFO Checking to see if there is a route at openshift-console/console... 
INFO Install complete!                            
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/dean/ocp413/auth/kubeconfig' 
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp413.isovalent.rocks 
INFO Login to the console with user: "kubeadmin", and password: "DyquU-FckQQ-CpN9g-7A57f" 
INFO Time elapsed: 30m33s

How do I test Network Connectivity with Cilium?

For this guide, the final piece is to run the standard Cilium connectivity tests on our cluster to confirm network connectivity.

Run the export KUBECONFIG command provided in the installer’s terminal output to connect to our cluster.
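
With the kubeconfig exported, it is worth confirming that the Cilium components are healthy before testing; for example (assuming the default cilium namespace used by the OLM manifests):

oc -n cilium get pods -o wide   # the Cilium agent and operator pods should all be Running
oc get co network               # the network cluster operator should report Available=True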

Before the Cilium network tests will run, we need to account for the SecurityContextConstraints (SCC) that OpenShift implements out of the box as part of its hardened security posture, by applying the following configuration.

$ kubectl apply -f - <<EOF
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: cilium-test
allowHostPorts: true
allowHostNetwork: true
users:
  - system:serviceaccount:cilium-test:default
priority: null
readOnlyRootFilesystem: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
volumes: null
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostPID: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: null
defaultAddCapabilities: null
requiredDropCapabilities: null
groups: null
EOF
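
We can confirm the new SCC exists before continuing:

oc get scc cilium-test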

Now we can configure the tests. First, create a namespace for them to run in:

$ kubectl create ns cilium-test

Deploy the checks with the command:

$ kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/1.14.3/examples/kubernetes/connectivity-check/connectivity-check.yaml

This will configure a series of deployments that use various connectivity paths to connect to each other. Connectivity paths include connections with and without service load-balancing, and various network policy combinations. The pod name indicates the connectivity variant, and the readiness and liveness gates indicate success or failure of the test:

$ kubectl get pods -n cilium-test
NAME                                                    READY   STATUS    RESTARTS   AGE
echo-a-568cb98744-tlvwv                                 1/1     Running   0          67s
echo-b-64db4dfd5d-q8kpj                                 1/1     Running   0          67s
echo-b-host-6b7bb88666-qnhz4                            1/1     Running   0          67s
host-to-b-multi-node-clusterip-6cfc94d779-5v2x7         1/1     Running   0          66s
host-to-b-multi-node-headless-5458c6bff-2v7m7           1/1     Running   0          66s
pod-to-a-allowed-cnp-55cb67b5c5-ltclc                   1/1     Running   0          66s
pod-to-a-c9b8bf6f7-z4k2h                                1/1     Running   0          66s
pod-to-a-denied-cnp-85fb9df657-ndg2n                    1/1     Running   0          66s
pod-to-b-intra-node-nodeport-55784cc5c9-t42kj           1/1     Running   0          66s
pod-to-b-multi-node-clusterip-5c46dd6677-jgzvf          1/1     Running   0          66s
pod-to-b-multi-node-headless-748dfc6fd7-2ggvq           1/1     Running   0          66s
pod-to-b-multi-node-nodeport-f6464499f-84t92            1/1     Running   0          66s
pod-to-external-1111-96c489555-srcvb                    1/1     Running   0          66s
pod-to-external-fqdn-allow-google-cnp-5f747dfc7-8jsxw   1/1     Running   0          66s

Note: If you deploy the connectivity check to a single node cluster, pods that check multi-node functionalities will remain in the Pending state. This is expected since these pods need at least 2 nodes to be scheduled successfully.
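
Once we are happy with the results, the test resources can be cleaned up again; a minimal sketch:

kubectl delete ns cilium-test
kubectl delete scc cilium-test   # removes the SecurityContextConstraints object created earlier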

Summary

This tutorial takes you as far as the initial deployment of Red Hat OpenShift and Cilium, including network connectivity testing. To run this cluster further for development or production use cases, I recommend continuing to follow the official Red Hat OpenShift documentation covering post-installation cluster tasks. You can also join the Cilium Slack workspace to join the discussion for all things Cilium related.

As you can see, it’s quick and easy to install Cilium into our Red Hat OpenShift environment, with minimal additional configuration from our existing workflows.

With Cilium installed in our cluster, we can now take advantage of its full range of features and use cases.

Why not jump into our hands-on labs, which take you through the various features of Cilium, Hubble, and Tetragon? We recommend getting started by diving into our three popular Isovalent Enterprise for Cilium labs covering Zero Trust Networking, Security Observability, and Connectivity Visibility.

Isovalent Enterprise for Cilium: Zero Trust Visibility

Creating the right Network Policies can be difficult. In this lab, you will use Hubble metrics to build a Network Policy Verdict dashboard in Grafana showing which flows need to be allowed in your policy approach.

Start Lab

Isovalent Enterprise for Cilium: Security Visibility

Learn how to simulate the exploitation of a Node.js application, with a reverse shell inside a container, and monitor the lateral movement within the Kubernetes environment.

Start Lab

Isovalent Enterprise for Cilium: Connectivity Visibility with Hubble

Learn how Hubble Flow events provide metadata-aware, DNS-aware, and API-aware visibility for network connectivity within a Kubernetes environment using Hubble CLI, Hubble UI and Hubble Timescape, which provides historical data for troubleshooting.

Start Lab
Author: Dean Lewis, Senior Technical Marketing Engineer
