
Isovalent and Red Hat OpenShift Service on AWS (ROSA) come together

Amit Gupta

Kubernetes doesn’t provide a network interface system by default. Instead, network plugins offer this functionality, with Cilium leading the way. Red Hat OpenShift supports several alternative CNI plugins and runs an extensive certification program to ensure CNIs work correctly with its enterprise platform, allowing the deployment of third-party Kubernetes CNIs such as Cilium. Isovalent Enterprise for Cilium meets all these requirements and has become the de facto standard for customers looking to run Kubernetes in production. A highly requested feature from our customers using Red Hat OpenShift Service on AWS (ROSA) is the ability to choose a CNI that covers the three pillars of networking, security, and observability. Today, we are happy to announce that Isovalent Enterprise for Cilium is fully supported as a CNI for ROSA. This blog shows how to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster without a preinstalled CNI plugin and then add Isovalent Enterprise for Cilium as the CNI plugin.
Note: This blog has been validated on ROSA with HCP 4.17 and Isovalent Enterprise for Cilium 1.15.

What is ROSA?

ROSA is a fully-managed Red Hat OpenShift Service running natively on Amazon Web Services (AWS), providing a Kubernetes-based turnkey application platform.

ROSA allows organizations to increase operational efficiency, refocus on innovation, and quickly build, deploy, and scale applications.

With ROSA, everything you need to deploy and manage applications is bundled, including container management, automation (Operators), monitoring, and more, all backed by expert Red Hat site reliability engineers (SREs). ROSA provides you with benefits, including:

  • Accelerate time to value: Focus on building and scaling applications that add value to the business.
  • Focus on innovation: Simplify operations so your teams can refocus on innovation, not managing infrastructure.
  • Optimize investment: Take advantage of current cloud investments and entitlements with AWS.
  • Hybrid cloud flexibility: Get a consistent Red Hat OpenShift experience across any environment: public cloud, private cloud, and edge.

What are the types of cluster topologies in ROSA?

ROSA has the following cluster topologies:

  • Hosted control plane (HCP) – The control plane is hosted within Red Hat’s AWS account and managed by Red Hat. Worker nodes are deployed in the customer’s AWS account.
  • Classic – The control plane and worker nodes are deployed in the customer’s AWS account.

ROSA with HCP offers a more efficient control plane architecture that helps reduce the AWS infrastructure fees incurred when running ROSA and allows for faster cluster creation times. ROSA with HCP and ROSA Classic can be enabled in the ROSA console. You can select which architecture you want to use when provisioning ROSA clusters using the ROSA CLI.

In this blog, we will be talking about the Hosted Control Plane.

What is Isovalent Enterprise for Cilium?

Isovalent Enterprise for Cilium is an enterprise-grade, hardened distribution of open-source projects Cilium, Hubble, and Tetragon, built and supported by the Cilium creators. Cilium enhances networking and security at the network layer, while Hubble ensures thorough network observability and tracing. Tetragon ties it all together with runtime enforcement and security observability, offering a well-rounded solution for connectivity, compliance, multi-cloud, and security concerns.

Why Isovalent Enterprise for Cilium for ROSA?

Red Hat OpenShift Service on AWS (ROSA) offers a seamless hybrid cloud experience by combining Red Hat OpenShift’s enterprise Kubernetes capabilities with AWS’s infrastructure. Isovalent Enterprise for Cilium elevates this integration by addressing critical network, security, and observability challenges, ensuring ROSA users maximize the value of their clusters.

  • Enhanced Networking for ROSA Workloads– ROSA provides the foundational compute stack necessary for application deployment, but Isovalent Enterprise for Cilium takes it further by leveraging eBPF-powered Cilium to optimize network performance. This ensures low-latency communication and efficient routing, even in high-throughput or latency-sensitive applications.
  • Simplified Multi-Cluster Networking with ROSA– Unified communication across clusters becomes essential as organizations scale their ROSA environments. Cilium’s Cluster Mesh simplifies inter-cluster communication, enabling seamless pod IP routing and service discovery across multiple ROSA and non-ROSA Kubernetes clusters. This capability directly addresses the complexities of hybrid and multi-cloud deployments.
  • Granular Network Security and Policy Enforcement– ROSA’s native capabilities are augmented by Cilium’s fine-grained, application-aware network policies. Security teams can define policies based on namespaces, labels, and application-based rules, such as HTTP API paths. Cilium’s DNS-aware and L7 policies enhance compliance for ROSA clusters deployed in regulated environments by securing communication with only approved external services.
  • Comprehensive Observability Tailored for ROSA– Isovalent Enterprise provides ROSA users unparalleled observability through tools like Hubble. Users gain detailed flow visibility into pod-to-pod and pod-to-service traffic, helping identify bottlenecks or unauthorized access. This observability extends to L7 traffic, ensuring ROSA applications meet performance and security benchmarks.
  • Native AWS Integration with ROSA– Cilium’s capabilities, such as static egress gateways and load balancer support, enhance ROSA’s close integration with AWS services. These facilitate the connection of ROSA workloads to legacy environments or complex AWS network topologies, ensuring compatibility and operational efficiency in hybrid setups.
  • Sidecar-Less Service Mesh for ROSA Applications– Traditional service meshes add operational complexity and performance overhead. Cilium’s sidecar-less service mesh approach allows ROSA users to leverage features like traffic management, observability, and security without additional costs or resource requirements.
  • Certified for Red Hat OpenShift and Supported by Red Hat and AWS– Since Isovalent Enterprise for Cilium is a certified Red Hat OpenShift CNI plugin, ROSA users can confidently adopt it without concerns about supportability or compatibility. It integrates seamlessly with ROSA’s managed services, ensuring reliability backed by joint Red Hat and AWS support.

By adopting Isovalent Enterprise for Cilium, ROSA users can enhance their clusters, address critical networking, security, and observability challenges, and maintain operational simplicity.
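To make the application-aware policies described above concrete, here is a minimal sketch of an L7 CiliumNetworkPolicy. The labels, port, and path are hypothetical examples, not part of any deployment in this blog; the file can be applied with kubectl once the cluster and Cilium are running.

```shell
# Minimal L7 policy sketch; the labels, port, and path are hypothetical.
cat > cnp-l7-sample.yaml << 'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-get-api
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/api/.*"
EOF
# Apply once the cluster and Cilium are up: kubectl apply -f cnp-l7-sample.yaml
```

This policy only admits HTTP GET requests on paths matching /api/.* from pods labeled app=frontend to pods labeled app=backend; all other ingress to the backend is dropped.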

What support do you get when you use ROSA & Isovalent Enterprise for Cilium?

Cilium is recognized as a Certified OpenShift CNI plug-in, having completed operator certification and passed end-to-end testing. If an issue arises, Red Hat will collaborate with Isovalent to troubleshoot it, per their third-party software support statements. Customers who have chosen Isovalent Enterprise for Cilium benefit from both vendors’ complete support experience with Cilium, working toward a common resolution.

Creating a ROSA with HCP cluster without a CNI plugin

Prerequisites

  • Ensure that you have completed the AWS prerequisites.
    • Access to AWS. Create a new account for free.
    • Enabled the ROSA service in the AWS Console.
      • If this is not enabled, the --no-cni flag will not be available when creating the ROSA cluster.
    • Log in to your Red Hat account by using the ROSA CLI.
    • Available AWS service quotas.
    • Configured the latest ROSA CLI (rosa) on your installation host.
  • Ensure that you have a configured virtual private cloud (VPC).
  • Install kubectl
  • Install Helm
  • Install the Cilium CLI: Isovalent Enterprise for Cilium provides a Cilium CLI tool that automatically collects the logs and debug information needed to troubleshoot your installation. You can install the Cilium CLI for Linux, macOS, or other operating systems on your local machine(s) or server(s).

Basic Checks

  • Log into your Red Hat account by using the ROSA CLI.
rosa login
I: Logged in as 'amitmavgupta' on 'https://api.openshift.com'
  • Verify your user details.
rosa whoami

AWS ARN:                      arn:aws:iam::############:user/*********.********@isovalent.com
AWS Account ID:               ############
AWS Default Region:           ap-northeast-2
OCM API:                      https://api.openshift.com
OCM Account Email:            *********.********@isovalent.com
OCM Account ID:               ###########################
OCM Account Name:             Amit Gupta
OCM Account Username:         amitmavgupta
OCM Organization External ID: ########
OCM Organization ID:          ###########################
OCM Organization Name:        ##############
  • Verify the AWS quotas for creating a ROSA cluster.
rosa verify quota

I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
  • Verify the ROSA OpenShift client.
rosa verify openshift-client

I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.17.3

Creating an OpenID Connect configuration

When using a ROSA cluster, you can create the OpenID Connect (OIDC) configuration before creating your cluster. This configuration is registered to be used with Red Hat OpenShift Cluster Manager.

  • Create your OIDC configuration alongside the AWS resources.
rosa create oidc-config --mode=auto  --yes

? Would you like to create a Managed (Red Hat hosted) OIDC Configuration Yes
I: Setting up managed OIDC configuration
I: To create Operator Roles for this OIDC Configuration, run the following command and remember to replace <user-defined> with a prefix of your choice:
       rosa create operator-roles --prefix <user-defined> --oidc-config-id 2er2#######################gakp4
If you are going to create a Hosted Control Plane cluster please include '--hosted-cp'
I: Creating OIDC provider using 'arn:aws:iam::###########:role/PowerUserAccessRole'
I: Created OIDC provider with ARN 'arn:aws:iam::###########:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4'

When creating your cluster, you must supply the OIDC config ID. The CLI output provides this value when you use --mode auto; with --mode manual, you must determine it from the AWS CLI output.

  • You can list the possible OIDC configurations available for the clusters associated with your user organization.
rosa list oidc-config

ID                                MANAGED  ISSUER URL                                                                   SECRET ARN
################################  true     https://rh-oidc.s3.us-east-1.amazonaws.com/################################
################################  true     https://oidc.op1.openshiftapps.com/################################
################################  true     https://oidc.op1.openshiftapps.com/################################
################################  true     https://dvbwgdztaeq9o.cloudfront.net/################################
################################  true     https://oidc.op1.openshiftapps.com/################################
################################  true     https://oidc.op1.openshiftapps.com/2er2#######################gakp4

Save the OIDC configuration ID as a variable to use later.

export OIDC_CONFIG_ID=2er2#######################gakp4

Creating the account-wide STS roles and policies

Before using the ROSA CLI to create ROSA clusters with HCP, the required account-wide roles and policies, including the Operator policies, must be created. If the required account-wide STS roles are not in your AWS account, create them and attach the policies.

rosa create account-roles --mode auto --hosted-cp
I: Logged in as 'amitmavgupta' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating AWS quota...
I: AWS quota ok. If cluster installation fails, validate actual AWS resource usage against https://docs.openshift.com/rosa/rosa_getting_started/rosa-required-aws-service-quotas.html
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.17.3
I: Creating account roles
I: Creating hosted CP account roles using 'arn:aws:iam::############:role/PowerUserAccessRole'
I: Attached trust policy to role 'ManagedOpenShift-HCP-ROSA-Installer-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Installer-Role)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRole"], "Effect": "Allow", "Principal": {"AWS": ["arn:aws:iam::############:role/RH-Managed-OpenShift-Installer"]}}]}
I: Created role 'ManagedOpenShift-HCP-ROSA-Installer-Role' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Installer-Role'
I: Attached policy 'ROSAInstallerPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAInstallerPolicy)' to role 'ManagedOpenShift-HCP-ROSA-Installer-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Installer-Role)'
I: Attached trust policy to role 'ManagedOpenShift-HCP-ROSA-Support-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Support-Role)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRole"], "Effect": "Allow", "Principal": {"AWS": ["arn:aws:iam::############:role/RH-Technical-Support-########"]}}]}
I: Created role 'ManagedOpenShift-HCP-ROSA-Support-Role' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Support-Role'
I: Attached policy 'ROSASRESupportPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy)' to role 'ManagedOpenShift-HCP-ROSA-Support-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Support-Role)'
I: Attached trust policy to role 'ManagedOpenShift-HCP-ROSA-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Worker-Role)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRole"], "Effect": "Allow", "Principal": {"Service": ["ec2.amazonaws.com"]}}]}
I: Created role 'ManagedOpenShift-HCP-ROSA-Worker-Role' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Worker-Role'
I: Attached policy 'ROSAWorkerInstancePolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAWorkerInstancePolicy)' to role 'ManagedOpenShift-HCP-ROSA-Worker-Role(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-HCP-ROSA-Worker-Role)'

Set your prefix as an environmental variable.

export PREFIX=ManagedOpenShift

Creating Operator roles and policies

When using ROSA with HCP, you must create the Operator IAM roles required for the deployment. The cluster Operators use these roles to obtain the temporary permissions required to carry out cluster operations, such as managing back-end storage, cloud provider credentials, and external access to a cluster.

rosa create operator-roles --hosted-cp --prefix ${PREFIX} --oidc-config-id ${OIDC_CONFIG_ID}
? Role creation mode: auto
? Operator roles prefix: ManagedOpenShift
? Create hosted control plane operator roles: Yes
I: Using arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Installer-Role for the Installer role
? Permissions boundary ARN (optional):
I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
I: Creating roles using 'arn:aws:iam::############:role/PowerUserAccessRole'
I: Attached trust policy to role 'ManagedOpenShift-openshift-image-registry-installer-cloud-creden(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-image-registry-installer-cloud-creden)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:openshift-image-registry:cluster-image-registry-operator" , "system:serviceaccount:openshift-image-registry:registry"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2##########################gakp4"}}]}
I: Created role 'ManagedOpenShift-openshift-image-registry-installer-cloud-creden' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-openshift-image-registry-installer-cloud-creden'
I: Attached policy 'ROSAImageRegistryOperatorPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAImageRegistryOperatorPolicy)' to role 'ManagedOpenShift-openshift-image-registry-installer-cloud-creden(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-image-registry-installer-cloud-creden)'
I: Attached trust policy to role 'ManagedOpenShift-openshift-ingress-operator-cloud-credentials(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-ingress-operator-cloud-credentials)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:openshift-ingress-operator:ingress-operator"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-openshift-ingress-operator-cloud-credentials'
I: Attached policy 'ROSAIngressOperatorPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAIngressOperatorPolicy)' to role 'ManagedOpenShift-openshift-ingress-operator-cloud-credentials(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-ingress-operator-cloud-credentials)'
I: Attached trust policy to role 'ManagedOpenShift-kube-system-kube-controller-manager(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-kube-controller-manager)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:kube-system:kube-controller-manager"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-kube-system-kube-controller-manager' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-kube-controller-manager'
I: Attached policy 'ROSAKubeControllerPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKubeControllerPolicy)' to role 'ManagedOpenShift-kube-system-kube-controller-manager(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-kube-controller-manager)'
I: Attached trust policy to role 'ManagedOpenShift-kube-system-capa-controller-manager(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-capa-controller-manager)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2defuj5f71mn1fb2a9d1se3bgakp4:sub": ["system:serviceaccount:kube-system:capa-controller-manager"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-kube-system-capa-controller-manager' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-capa-controller-manager'
I: Attached policy 'ROSANodePoolManagementPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSANodePoolManagementPolicy)' to role 'ManagedOpenShift-kube-system-capa-controller-manager(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-capa-controller-manager)'
I: Attached trust policy to role 'ManagedOpenShift-kube-system-control-plane-operator(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-control-plane-operator)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2defuj5f71mn1fb2a9d1se3bgakp4:sub": ["system:serviceaccount:kube-system:control-plane-operator"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-kube-system-control-plane-operator' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-control-plane-operator'
I: Attached policy 'ROSAControlPlaneOperatorPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAControlPlaneOperatorPolicy)' to role 'ManagedOpenShift-kube-system-control-plane-operator(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-control-plane-operator)'
I: Attached trust policy to role 'ManagedOpenShift-kube-system-kms-provider(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-kms-provider)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:kube-system:kms-provider"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-kube-system-kms-provider' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-kms-provider'
I: Attached policy 'ROSAKMSProviderPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKMSProviderPolicy)' to role 'ManagedOpenShift-kube-system-kms-provider(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-kube-system-kms-provider)'
I: Attached trust policy to role 'ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-operator" , "system:serviceaccount:openshift-cluster-csi-drivers:aws-ebs-csi-driver-controller-sa"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent'
I: Attached policy 'ROSAAmazonEBSCSIDriverOperatorPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAAmazonEBSCSIDriverOperatorPolicy)' to role 'ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent)'
I: Attached trust policy to role 'ManagedOpenShift-openshift-cloud-network-config-controller-cloud(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-cloud-network-config-controller-cloud)': {"Version": "2012-10-17", "Statement": [{"Action": ["sts:AssumeRoleWithWebIdentity"], "Effect": "Allow", "Condition": {"StringEquals": {"oidc.op1.openshiftapps.com/2er2#######################gakp4:sub": ["system:serviceaccount:openshift-cloud-network-config-controller:cloud-network-config-controller"]}}, "Principal": {"Federated": "arn:aws:iam::############:oidc-provider/oidc.op1.openshiftapps.com/2er2#######################gakp4"}}]}
I: Created role 'ManagedOpenShift-openshift-cloud-network-config-controller-cloud' with ARN 'arn:aws:iam::############:role/ManagedOpenShift-openshift-cloud-network-config-controller-cloud'

I: Attached policy 'ROSACloudNetworkConfigOperatorPolicy(https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSACloudNetworkConfigOperatorPolicy)' to role 'ManagedOpenShift-openshift-cloud-network-config-controller-cloud(https://console.aws.amazon.com/iam/home?#/roles/ManagedOpenShift-openshift-cloud-network-config-controller-cloud)'

I: To create a cluster with these roles, run the following command:
       rosa create cluster --sts --oidc-config-id 2er2#######################gakp4 --operator-roles-prefix ManagedOpenShift --hosted-cp

The Operator roles are now created and ready to use to build your ROSA cluster with HCP.

rosa list operator-roles

I: Fetching operator roles
ROLE PREFIX       AMOUNT IN BUNDLE
cilium            3
managedopenshift  8

Creating the AWS VPC

For this blog, we will use Terraform to create the VPC in which the ROSA cluster will be created. This is optional; you can also create your VPC in other ways (Ansible, AWS CloudFormation, the AWS console, the AWS CLI, etc.).

git clone git@github.com:openshift-cs/terraform-vpc-example.git terraform-vpc-rosa
cd ./terraform-vpc-rosa
  • Set your region & cluster name as environmental variables.
export REGION="ap-northeast-2"
export CLUSTER_NAME=amit-rosa
  • Create a terraform.tfvars file that contains all the variables to create a VPC for the ROSA cluster.
cat > ./terraform.tfvars << EOF
region               = "${REGION}"
subnet_azs           = ["apne2-az1", "apne2-az2", "apne2-az3", "apne2-az4"]
cluster_name         = "${CLUSTER_NAME}"
# Select a private range; plan with plenty of room for growth
vpc_cidr             = "10.0.0.0/16"
# This determines whether there is more room for new subnets or whether the subnets are bigger
subnet_cidr_prefix   = 24
private_subnets_only = false
# Stretching the cluster over 3 AZs is a common practice for high availability; single-zone clusters incur less cost and latency.
single_az_only       = false
EOF
  • Create the VPC with Terraform using the following commands.
terraform init
terraform plan -out rosa.tfplan
terraform apply rosa.tfplan

The subnet IDs created for the cluster (also available via terraform output -raw cluster-subnets-string):

subnet-04965a6c64f704992,subnet-0933242ccb44d28f4
  • Once the VPC has been created, the Cilium firewall prerequisites must be added to the default worker security group as inbound rules.
    • These are needed for Cilium-specific services (Cilium agent, Cilium operator, Hubble Relay, Hubble server, Spire agent, mutual authentication, and cluster health checks) to work seamlessly.
Type        Protocol  Port  Source           Description
Custom TCP  TCP       6060  SG_SEC_GROUP_ID  Cilium agent
Custom TCP  TCP       6061  SG_SEC_GROUP_ID  Cilium operator
Custom TCP  TCP       6062  SG_SEC_GROUP_ID  Hubble Relay
Custom TCP  TCP       4250  SG_SEC_GROUP_ID  Mutual authentication
Custom TCP  TCP       4251  SG_SEC_GROUP_ID  Spire agent health check
Custom TCP  TCP       4240  SG_SEC_GROUP_ID  Cluster health checks
Custom TCP  TCP       4244  SG_SEC_GROUP_ID  Hubble server
Custom TCP  TCP       4245  SG_SEC_GROUP_ID  Hubble Relay
Custom TCP  TCP       8080  SG_SEC_GROUP_ID  Cilium test namespace (connectivity tests)
Custom TCP  TCP       4000  SG_SEC_GROUP_ID  Cilium test namespace (connectivity tests)
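One way to add these rules is with the AWS CLI. The sketch below only generates the authorize-security-group-ingress commands into a script for review before running; SG_ID is a hypothetical placeholder for your worker node security group ID.

```shell
# SG_ID is a hypothetical placeholder; replace with the worker security group ID.
SG_ID="sg-xxxxxxxxxxxxxxxxx"
# Ports from the table above (Cilium, Hubble, Spire, health checks, connectivity tests)
for PORT in 6060 6061 6062 4250 4251 4240 4244 4245 8080 4000; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "${SG_ID}" --source-group "${SG_ID}" \
    --protocol tcp --port "${PORT}"
done > add-cilium-sg-rules.sh
# Review the generated commands, then run: sh add-cilium-sg-rules.sh
```

Using the security group itself as the source (--source-group) restricts these ports to traffic between cluster nodes rather than opening them to the VPC at large.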

Creating the cluster

When using the ROSA command line interface (CLI), rosa, to create a cluster, you can add an optional flag --no-cni to create a cluster without a CNI plugin.

  • Set the following as environmental variables.
export CLUSTER_NAME=amit-rosa
export AWS_ACCOUNT_ID=############
export EXTERNAL_ID=amit
export OIDC_CONFIG_ID=2er2#######################gakp4
export OWNER=amit
export OCP_VERSION=4.17.2
export SUBNET_IDS=$(terraform output -raw cluster-subnets-string)
  • Brief details about the values passed while creating the ROSA cluster:
    • The machine CIDR (10.0.0.0/16) is a subnet of the VPC CIDR.
    • The pod CIDR and service CIDR must not overlap with CIDRs that workloads on the cluster connect to.
    • The host prefix determines how the pod CIDR is split: it sets the size of each node’s CIDR, and therefore both how many nodes can be added to the cluster and how many pods each node can run.
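The split can be checked with a little shell arithmetic, using the values passed to rosa create cluster below (--pod-cidr 10.1.0.0/16 and --host-prefix 23):

```shell
# Back-of-the-envelope: how --host-prefix splits the pod CIDR
POD_CIDR_PREFIX=16   # from --pod-cidr 10.1.0.0/16
HOST_PREFIX=23       # from --host-prefix 23
# Each node gets one /23 carved out of the /16
NODE_SUBNETS=$(( 1 << (HOST_PREFIX - POD_CIDR_PREFIX) ))
# Usable pod IPs per node subnet (minus network and broadcast addresses)
PODS_PER_NODE=$(( (1 << (32 - HOST_PREFIX)) - 2 ))
echo "${NODE_SUBNETS} node subnets, ${PODS_PER_NODE} pod IPs per node"
```

With these values, the /16 pod CIDR yields 128 node subnets of 510 usable pod IPs each.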
  • Create the ROSA cluster.
rosa create cluster --cluster-name ${CLUSTER_NAME} --sts --role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/ManagedOpenShift-HCP-ROSA-Installer-Role --support-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/ManagedOpenShift-HCP-ROSA-Support-Role --worker-iam-role arn:aws:iam::${AWS_ACCOUNT_ID}:role/ManagedOpenShift-HCP-ROSA-Worker-Role --external-id ${EXTERNAL_ID} --operator-roles-prefix ${PREFIX} --oidc-config-id ${OIDC_CONFIG_ID} --tags "owner:${OWNER}" --region ap-northeast-2 --version ${OCP_VERSION} --replicas 3 --compute-machine-type m5.xlarge --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.1.0.0/16 --host-prefix 23 --subnet-ids ${SUBNET_IDS} --disable-workload-monitoring --hosted-cp --billing-account ${AWS_ACCOUNT_ID} --no-cni --watch
I: Reusable OIDC Configuration detected. Validating trusted relationships to operator roles:
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-openshift-cloud-network-config-controller-cloud'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-kube-controller-manager'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-capa-controller-manager'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-control-plane-operator'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-kube-system-kms-provider'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-openshift-image-registry-installer-cloud-creden'
I: Using 'arn:aws:iam::############:role/ManagedOpenShift-openshift-ingress-operator-cloud-credentials'
I: Creating cluster 'amit-rosa'
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'amit-rosa' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
Name:                       amit-rosa
Domain Prefix:              amit-rosa
Display Name:               amit-rosa
ID:                         2er2m5lqot4cmv091apq9k186n9ienea
External ID:                9c98f9bd-c25e-4322-a3f4-1eae688ca5c4
Control Plane:              ROSA Service Hosted
OpenShift Version:          4.17.2
Channel Group:              stable
DNS:                        Not ready
AWS Account:                ############
AWS Billing Account:        ############
API URL:
Console URL:
Region:                     ap-northeast-2
Availability:
- Control Plane:           MultiAZ
- Data Plane:              SingleAZ
Nodes:
- Compute (desired):       2
- Compute (current):       0
Network:
- Type:                    Other
- Service CIDR:            172.30.0.0/16
- Machine CIDR:            10.0.0.0/16
- Pod CIDR:                10.128.0.0/14
- Host Prefix:             /23
- Subnets:                 subnet-04965a6c64f704992, subnet-0933242ccb44d28f4
EC2 Metadata Http Tokens:   optional
Role (STS) ARN:             arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Installer-Role
STS External ID:            amit
Support Role ARN:           arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Support-Role
Instance IAM Roles:
- Worker:                  arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Worker-Role
Operator IAM Roles:
- arn:aws:iam::############:role/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent
- arn:aws:iam::############:role/ManagedOpenShift-openshift-cloud-network-config-controller-cloud
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-kube-controller-manager
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-capa-controller-manager
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-control-plane-operator
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-kms-provider
- arn:aws:iam::############:role/ManagedOpenShift-openshift-image-registry-installer-cloud-creden
- arn:aws:iam::############:role/ManagedOpenShift-openshift-ingress-operator-cloud-credentials
Managed Policies:           Yes
State:                      waiting (Waiting for user action)
Private:                    No
Delete Protection:          Disabled
Created:                    Nov  4 2024 07:34:52 UTC
User Workload Monitoring:   Disabled
Details Page:               https://console.redhat.com/openshift/details/s/2oNMgYHfRrNOKgPy6RZsk6UhZEM
OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/2er2defuj5f71mn1fb2a9d1se3bgakp4 (Managed)
Audit Log Forwarding:       Disabled
External Authentication:    Disabled
Etcd Encryption:            Disabled
I: Cluster 'amit-rosa' is in waiting state waiting for installation to begin. Logs will show up within 5 minutes
0001-01-01 00:00:00 +0000 UTC hostedclusters amit-rosa Version
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Condition not found in the CVO.
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Condition not found in the HCP
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa The hosted control plane is not found
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Waiting for hosted control plane to be healthy
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Waiting for hosted control plane kubeconfig to be created
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Ignition server deployment not found
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Configuration passes validation
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa HostedCluster is supported by operator configuration
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Release image is valid
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa ValidAWSIdentityProvider StatusUnknown
2024-11-04 07:38:25 +0000 UTC hostedclusters amit-rosa Reconciliation active on resource
2024-11-04 07:38:29 +0000 UTC hostedclusters amit-rosa HostedCluster is at expected version
2024-11-04 07:38:33 +0000 UTC hostedclusters amit-rosa Required platform credentials are found
2024-11-04 07:38:33 +0000 UTC hostedclusters amit-rosa failed to get referenced secret ocm-production-################################/cluster-api-cert: Secret "cluster-api-cert" not found
2024-11-04 07:40:12 +0000 UTC hostedclusters amit-rosa Reconciliation completed successfully
2024-11-04 07:40:12 +0000 UTC hostedclusters amit-rosa OIDC configuration is valid
2024-11-04 07:40:26 +0000 UTC hostedclusters amit-rosa WebIdentityErr
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa lookup api.amit-rosa.eypm.p3.openshiftapps.com on 172.30.0.10:53: no such host
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa capi-provider deployment has 1 unavailable replicas
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa AWS KMS is not configured
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa EtcdAvailable StatefulSetNotFound
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa Kube APIServer deployment not found
2024-11-04 07:40:31 +0000 UTC hostedclusters amit-rosa router load balancer is not provisioned; 1s since creation.; private-router load balancer is not provisioned; 1s since creation.; router load balancer is not provisioned; 1s since creation.
2024-11-04 07:40:38 +0000 UTC hostedclusters amit-rosa All is well
2024-11-04 07:41:39 +0000 UTC hostedclusters amit-rosa EtcdAvailable QuorumAvailable
2024-11-04 07:42:45 +0000 UTC hostedclusters amit-rosa Kube APIServer deployment is available
2024-11-04 07:42:48 +0000 UTC hostedclusters amit-rosa All is well
2024-11-04 07:43:07 +0000 UTC hostedclusters amit-rosa The hosted cluster is not degraded
2024-11-04 07:43:33 +0000 UTC hostedclusters amit-rosa [catalog-operator deployment has 1 unavailable replicas, certified-operators-catalog deployment has 2 unavailable replicas, cloud-credential-operator deployment has 1 unavailable replicas, cluster-network-operator deployment has 1 unavailable replicas, cluster-storage-operator deployment has 1 unavailable replicas, community-operators-catalog deployment has 2 unavailable replicas, csi-snapshot-controller-operator deployment has 1 unavailable replicas, dns-operator deployment has 1 unavailable replicas, hosted-cluster-config-operator deployment has 1 unavailable replicas, ignition-server deployment has 1 unavailable replicas, ingress-operator deployment has 1 unavailable replicas, olm-operator deployment has 1 unavailable replicas, packageserver deployment has 3 unavailable replicas, redhat-marketplace-catalog deployment has 2 unavailable replicas, redhat-operators-catalog deployment has 2 unavailable replicas, router deployment has 1 unavailable replicas]
2024-11-04 07:43:36 +0000 UTC hostedclusters amit-rosa All is well
2024-11-04 07:43:43 +0000 UTC hostedclusters amit-rosa The hosted control plane is available
I: Cluster 'amit-rosa' is now ready
  • Check the status of your cluster by running the following command.
rosa describe cluster -c $CLUSTER_NAME
Name:                       amit-rosa
Domain Prefix:              amit-rosa
Display Name:               amit-rosa
ID:                         ###################################
External ID:                ###################################
Control Plane:              ROSA Service Hosted
OpenShift Version:          4.17.2
Channel Group:              stable
DNS:                        amit-rosa.eypm.p3.openshiftapps.com
AWS Account:                ############
AWS Billing Account:        ############
API URL:                    https://api.amit-rosa.eypm.p3.openshiftapps.com:443
Console URL:
Region:                     ap-northeast-2
Availability:
- Control Plane:           MultiAZ
- Data Plane:              SingleAZ
Nodes:
- Compute (desired):       2
- Compute (current):       0
Network:
- Type:                    Other
- Service CIDR:            172.30.0.0/16
- Machine CIDR:            10.0.0.0/16
- Pod CIDR:                10.128.0.0/14
- Host Prefix:             /23
- Subnets:                 subnet-04965a6c64f704992, subnet-0933242ccb44d28f4
EC2 Metadata Http Tokens:   optional
Role (STS) ARN:             arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Installer-Role
STS External ID:            amit
Support Role ARN:           arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Support-Role
Instance IAM Roles:
- Worker:                  arn:aws:iam::############:role/ManagedOpenShift-HCP-ROSA-Worker-Role
Operator IAM Roles:
- arn:aws:iam::############:role/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent
- arn:aws:iam::############:role/ManagedOpenShift-openshift-cloud-network-config-controller-cloud
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-kube-controller-manager
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-capa-controller-manager
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-control-plane-operator
- arn:aws:iam::############:role/ManagedOpenShift-kube-system-kms-provider
- arn:aws:iam::############:role/ManagedOpenShift-openshift-image-registry-installer-cloud-creden
- arn:aws:iam::############:role/ManagedOpenShift-openshift-ingress-operator-cloud-credentials
Managed Policies:           Yes
State:                      ready
Private:                    No
Delete Protection:          Disabled
Created:                    Nov  4 2024 07:34:52 UTC
User Workload Monitoring:   Disabled
Details Page:               https://console.redhat.com/openshift/details/s/##########################
OIDC Endpoint URL:          https://oidc.op1.openshiftapps.com/############################## (Managed)
Audit Log Forwarding:       Disabled
External Authentication:    Disabled
Etcd Encryption:            Disabled
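
The Network section above also defines the cluster's addressing budget: the /14 Pod CIDR is carved into one /23 block per node (the Host Prefix), which bounds both the maximum node count and the pod IP space per node. A quick back-of-the-envelope check:

```shell
# Addressing budget implied by the describe output above:
#   Pod CIDR 10.128.0.0/14, Host Prefix /23 (one /23 block per node)
pod_cidr_prefix=14
host_prefix=23

node_blocks=$(( 1 << (host_prefix - pod_cidr_prefix) ))   # /23 blocks that fit in a /14
ips_per_node=$(( 1 << (32 - host_prefix) ))               # addresses in a /23

echo "max node CIDR blocks: ${node_blocks}"   # 512
echo "pod IPs per node:     ${ips_per_node}"  # 512
```

These same numbers are mirrored later in the Cilium Helm values (`clusterPoolIPv4PodCIDRList: 10.128.0.0/14` and `clusterPoolIPv4MaskSize: 23`), since Cilium's cluster-pool IPAM takes over pod IP allocation from the platform.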

Log in to the ROSA cluster

  • You can log in to the ROSA cluster using the ROSA CLI.
rosa create admin --cluster=${CLUSTER_NAME}

I: Admin account has been added to cluster 'amit-rosa'.
I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
I: To login, run the following command:

oc login https://api.amit-rosa.eypm.p3.openshiftapps.com:443 --username cluster-admin --password ########################
I: It may take several minutes for this access to become active.
  • Although the ROSA with HCP cluster installation is complete, the cluster cannot operate without a CNI plugin. Because the nodes are not ready, workloads cannot be deployed. For example, the ROSA cluster web console is unavailable, so you must use the Red Hat OpenShift CLI (oc) to log in to the cluster. Other Red Hat OpenShift components, such as the HAProxy-based Ingress Controller, the image registry, and the Prometheus-based monitoring stack, are not running. This is expected behavior until you install a CNI provider.
oc get nodes -o wide
NAME                                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-0-62.ap-northeast-2.compute.internal   NotReady    worker   47h   v1.30.5   10.0.0.62     <none>        Red Hat Enterprise Linux CoreOS 417.94.202410160352-0   5.14.0-427.40.1.el9_4.x86_64   cri-o://1.30.6-5.rhaos4.17.git690d4d6.el9
ip-10-0-0-65.ap-northeast-2.compute.internal   NotReady    worker   47h   v1.30.5   10.0.0.65     <none>        Red Hat Enterprise Linux CoreOS 417.94.202410160352-0   5.14.0-427.40.1.el9_4.x86_64   cri-o://1.30.6-5.rhaos4.17.git690d4d6.el9
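
The NotReady status comes from the kubelet, which reports the container runtime's network as not ready while no CNI is configured. On a live cluster you can read the reason from each node's Ready condition (the `jsonpath` query in the comment below is one way to do it); since that requires cluster access, the snippet simulates the check against a typical pre-CNI kubelet message:

```shell
# On the cluster (requires oc access), the reason is visible with:
#   oc get nodes -o jsonpath='{range .items[*]}{.metadata.name}: {.status.conditions[?(@.type=="Ready")].message}{"\n"}{end}'
# Simulated here with the kind of message the kubelet reports before a CNI is installed:
msg='container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady'

case "$msg" in
  *NetworkPluginNotReady*) status="NotReady: no CNI installed yet" ;;
  *)                       status="network plugin ready" ;;
esac
echo "$status"
```

Once Cilium is installed and its agent pods are running, this condition clears and the nodes transition to Ready.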

Install Isovalent as the CNI

  • Create a namespace for installing Cilium.
cat <<EOF | oc create -f -
apiVersion: v1
kind: Namespace
metadata:
  name: cilium
  labels:
    kubernetes.io/metadata.name: cilium
    name: cilium
    openshift.io/cluster-logging: "true"
    openshift.io/cluster-monitoring: "true"
    openshift.io/run-level: "0"
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/audit-version: v1.24
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
    pod-security.kubernetes.io/warn: privileged
    pod-security.kubernetes.io/warn-version: v1.24
spec:
  finalizers:
  - kubernetes
EOF
  • Create a YAML file with the desired Helm values for the Cilium deployment.
ipam:
  mode: "cluster-pool"
  operator:
    clusterPoolIPv4MaskSize: 23
    clusterPoolIPv4PodCIDRList:
    - "10.128.0.0/14"
ipv4:
  enabled: true
ipv6:
  enabled: false
# tunnelPort and clusterHealthPort must be reachable between nodes. The non-default
# values below are already open on ROSA nodes without any further security group
# configuration; the Cilium defaults are not.
tunnelPort: 4789
clusterHealthPort: 9940
cni:
  chainingMode: portmap
  binPath: "/var/lib/cni/bin"
  confPath: "/var/run/multus/cni/net.d"
k8s:
  requireIPv4PodCIDR: true
endpointRoutes:
  enabled: true
bpf:
  preallocateMaps: true
identityChangeGracePeriod: 0s
sessionAffinity: true
hubble:
  enabled: true
  tls:
    enabled: true
  relay:
    enabled: true
  ui:
    enabled: true
cluster:
  name: amit-rosa
debug:
  enabled: false
  • With the ROSA cluster up and running, you can install Isovalent Enterprise for Cilium using these Helm values. Access to the Enterprise Helm charts can be obtained by contacting sales or support@isovalent.com.
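The values file above is then passed to `helm`. A sketch of the install command follows; the repository alias (`isovalent`), chart name, and version placeholder are assumptions here, since the actual Enterprise chart location and version are provided with your Isovalent subscription:

```shell
# Sketch only: "isovalent/cilium" and the version are placeholders; substitute
# the chart reference and version supplied by Isovalent. The command is composed
# into a variable here so it can be reviewed before running.
HELM_CMD="helm upgrade --install cilium isovalent/cilium --namespace cilium --version <enterprise-version> --values values.yaml"

echo "$HELM_CMD"
```

`helm upgrade --install` is used so the same command works for the first install and for later values changes.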
  • Once Isovalent Enterprise for Cilium is installed, check the status of the nodes and pods.
oc get nodes -o wide

NAME                                           STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                KERNEL-VERSION                 CONTAINER-RUNTIME
ip-10-0-0-62.ap-northeast-2.compute.internal   Ready    worker   47h   v1.30.5   10.0.0.62     <none>        Red Hat Enterprise Linux CoreOS 417.94.202410160352-0   5.14.0-427.40.1.el9_4.x86_64   cri-o://1.30.6-5.rhaos4.17.git690d4d6.el9
ip-10-0-0-65.ap-northeast-2.compute.internal   Ready    worker   47h   v1.30.5   10.0.0.65     <none>        Red Hat Enterprise Linux CoreOS 417.94.202410160352-0   5.14.0-427.40.1.el9_4.x86_64   cri-o://1.30.6-5.rhaos4.17.git690d4d6.el9
oc get pods -o wide -A

NAMESPACE                                          NAME                                                                READY   STATUS      RESTARTS      AGE     IP             NODE                                           NOMINATED NODE   READINESS GATES
cilium-test-1                                      client-974f6c69d-zdssr                                              1/1     Running     0             44h     10.128.0.163   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      client2-57cf4468f-w5kf7                                             1/1     Running     0             44h     10.128.1.34    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      client3-67f959dd9b-n2pfg                                            1/1     Running     0             44h     10.128.3.81    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      echo-other-node-796bd758f9-hk72z                                    2/2     Running     0             44h     10.128.3.5     ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      echo-same-node-c549568d9-k42f8                                      2/2     Running     0             44h     10.128.1.160   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      host-netns-l45bc                                                    1/1     Running     0             44h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium-test-1                                      host-netns-w2w8d                                                    1/1     Running     0             44h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             cilium-7wwg6                                                        1/1     Running     0             45h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             cilium-operator-6d7f9fb6f-wpj8g                                     1/1     Running     0             45h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             cilium-operator-6d7f9fb6f-z69n9                                     1/1     Running     0             45h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             cilium-p9nn4                                                        1/1     Running     0             45h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             hubble-relay-97c7cb898-nq65x                                        1/1     Running     0             45h     10.128.2.118   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
cilium                                             hubble-ui-5dc9c647db-5k675                                          2/2     Running     0             45h     10.128.2.107   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
kube-system                                        konnectivity-agent-2cnt9                                            1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
kube-system                                        konnectivity-agent-bvn7m                                            1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
kube-system                                        kube-apiserver-proxy-ip-10-0-0-62.ap-northeast-2.compute.internal   1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
kube-system                                        kube-apiserver-proxy-ip-10-0-0-65.ap-northeast-2.compute.internal   1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-backplane-managed-scripts                osd-delete-backplane-script-resources-28847562-nwtnv                0/1     Completed   0             7h39m   10.128.0.164   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-backplane                                osd-delete-backplane-serviceaccounts-28848000-zm8pd                 0/1     Completed   0             21m     10.128.0.199   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-backplane                                osd-delete-backplane-serviceaccounts-28848010-6mqkr                 0/1     Completed   0             11m     10.128.0.245   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-backplane                                osd-delete-backplane-serviceaccounts-28848020-rp7wk                 0/1     Completed   0             65s     10.128.1.75    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-cluster-csi-drivers                      aws-ebs-csi-driver-node-hggpm                                       3/3     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-cluster-csi-drivers                      aws-ebs-csi-driver-node-hq25h                                       3/3     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-cluster-node-tuning-operator             tuned-qqqgk                                                         1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-cluster-node-tuning-operator             tuned-wrn9m                                                         1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-cluster-samples-operator                 cluster-samples-operator-78f46b77f-95qfp                            2/2     Running     0             2d      10.128.1.48    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-console-operator                         console-operator-9b9b44fc-6g77s                                     1/1     Running     2 (47h ago)   2d      10.128.1.167   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-console                                  console-6476b6fdbc-hlpmz                                            1/1     Running     0             47h     10.128.3.23    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-console                                  console-6476b6fdbc-jk7z7                                            1/1     Running     0             47h     10.128.0.53    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-console                                  downloads-6bdd89b5d6-fw8qg                                          1/1     Running     0             47h     10.128.3.190   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-console                                  downloads-6bdd89b5d6-lw7rr                                          1/1     Running     0             47h     10.128.0.119   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-dns                                      dns-default-6btkf                                                   2/2     Running     0             47h     10.128.3.34    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-dns                                      dns-default-m2dsf                                                   2/2     Running     0             47h     10.128.0.166   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-dns                                      node-resolver-8mw6j                                                 1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-dns                                      node-resolver-hlj9q                                                 1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           image-pruner-28846080-4tg67                                         0/1     Completed   0             32h     10.128.1.121   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           image-pruner-28847520-rvk8k                                         0/1     Completed   0             8h      10.128.0.32    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           image-registry-7dbdd67497-j46hc                                     1/1     Running     0             47h     10.128.1.173   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           image-registry-7dbdd67497-nqjng                                     1/1     Running     0             47h     10.128.2.26    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           node-ca-92cwp                                                       1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-image-registry                           node-ca-hh49z                                                       1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-ingress-canary                           ingress-canary-zh8vg                                                1/1     Running     0             47h     10.128.0.238   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-ingress-canary                           ingress-canary-zlcj4                                                1/1     Running     0             47h     10.128.2.93    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-ingress                                  router-default-59d74d7688-c64qx                                     1/1     Running     0             47h     10.128.3.98    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-ingress                                  router-default-59d74d7688-qsdc6                                     1/1     Running     0             2d      10.128.0.45    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-insights                                 insights-operator-55dd556b86-z6c9s                                  1/1     Running     1 (47h ago)   2d      10.128.1.234   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-kube-proxy                               openshift-kube-proxy-95gdg                                          2/2     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-kube-proxy                               openshift-kube-proxy-kp5gd                                          2/2     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-kube-storage-version-migrator-operator   kube-storage-version-migrator-operator-6c76f55d85-fzx77             1/1     Running     1 (47h ago)   2d      10.128.1.184   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-kube-storage-version-migrator            migrator-b6db546c6-4cvdj                                            1/1     Running     0             47h     10.128.0.77    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-machine-config-operator                  kube-rbac-proxy-crio-ip-10-0-0-62.ap-northeast-2.compute.internal   1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-machine-config-operator                  kube-rbac-proxy-crio-ip-10-0-0-65.ap-northeast-2.compute.internal   1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               alertmanager-main-0                                                 6/6     Running     0             47h     10.128.2.94    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               alertmanager-main-1                                                 6/6     Running     0             47h     10.128.0.33    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               cluster-monitoring-operator-69d8b7d94b-9gcrm                        1/1     Running     0             2d      10.128.1.94    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               kube-state-metrics-6944cc6f8b-jwkqf                                 3/3     Running     0             47h     10.128.3.191   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               metrics-server-85fdf766d7-7dx7m                                     1/1     Running     0             39m     10.128.0.2     ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               metrics-server-85fdf766d7-q4qrv                                     1/1     Running     0             39m     10.128.2.114   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               monitoring-plugin-59f85b9f48-2l4x9                                  1/1     Running     0             47h     10.128.2.11    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               monitoring-plugin-59f85b9f48-xdhgl                                  1/1     Running     0             47h     10.128.0.162   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               node-exporter-8vlqm                                                 2/2     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               node-exporter-gft95                                                 2/2     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               openshift-state-metrics-78c785847d-gd92z                            3/3     Running     0             47h     10.128.2.106   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               prometheus-k8s-0                                                    6/6     Running     0             47h     10.128.2.166   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               prometheus-k8s-1                                                    6/6     Running     0             47h     10.128.0.195   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               prometheus-operator-6647f45795-7mrfm                                2/2     Running     0             47h     10.128.3.112   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               prometheus-operator-admission-webhook-6554b594c4-cbzll              1/1     Running     0             47h     10.128.2.105   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               prometheus-operator-admission-webhook-6554b594c4-fxrmz              1/1     Running     0             47h     10.128.0.91    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               telemeter-client-c9b99c478-7k8zw                                    3/3     Running     0             47h     10.128.2.55    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               thanos-querier-596847fc4b-c7gf5                                     6/6     Running     0             47h     10.128.3.65    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-monitoring                               thanos-querier-596847fc4b-tsrjm                                     6/6     Running     0             47h     10.128.0.218   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   multus-44c2x                                                        1/1     Running     1 (47h ago)   47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   multus-additional-cni-plugins-pm2zh                                 1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   multus-additional-cni-plugins-rbxph                                 1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   multus-v52st                                                        1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   network-metrics-daemon-4rtt7                                        2/2     Running     0             47h     10.128.0.17    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-multus                                   network-metrics-daemon-dm24n                                        2/2     Running     0             47h     10.128.3.175   ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-console                          networking-console-plugin-7fd9675d86-5h8tj                          1/1     Running     0             2d      10.128.1.235   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-console                          networking-console-plugin-7fd9675d86-mdxcb                          1/1     Running     0             2d      10.128.0.63    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-diagnostics                      network-check-source-54b8b5c596-gqhpx                               1/1     Running     0             2d      10.128.0.51    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-diagnostics                      network-check-target-6qk5q                                          1/1     Running     0             47h     10.128.2.67    ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-diagnostics                      network-check-target-trpdt                                          1/1     Running     0             47h     10.128.0.29    ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-operator                         iptables-alerter-kvlb4                                              1/1     Running     0             47h     10.0.0.65      ip-10-0-0-65.ap-northeast-2.compute.internal   <none>           <none>
openshift-network-operator                         iptables-alerter-x9k6w                                              1/1     Running     0             47h     10.0.0.62      ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-service-ca-operator                      service-ca-operator-6d56bd87cb-hfz88                                1/1     Running     1 (47h ago)   2d      10.128.0.126   ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
openshift-service-ca                               service-ca-65878fb8fb-62fdp                                         1/1     Running     0             47h     10.128.0.1     ip-10-0-0-62.ap-northeast-2.compute.internal   <none>           <none>
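Before moving on to the Cilium-specific checks, a quick way to confirm that every pod in the listing above is healthy is to filter for pods that are not in a Running or Succeeded phase (a hypothetical spot check, not part of the original walkthrough; an empty result means all workloads came up after the Cilium install):

```shell
# List any pods across all namespaces that are not Running or Completed.
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded
```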

Isovalent Status Checks

  • Check Cilium status & validate Cilium version
kubectl exec -n cilium -ti ds/cilium -- cilium status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.30 (v1.30.5) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    False   [ens5   10.0.0.65 fe80::8c00:27fe:f848:bfe8]
Host firewall:           Disabled
SRv6:                    Disabled
CNI Chaining:            portmap
CNI Config file:         successfully wrote CNI configuration file to /host/etc/cni/net.d/05-cilium.conflist
Cilium:                  Ok   1.15.5-cee.1 (v1.15.5-cee.1-e6056c28)
NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 24/510 allocated from 10.128.2.0/23,
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       150/150 healthy
Proxy Status:            OK, ip 10.128.3.182, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 4095/4095 (100.00%), Flows/s: 82.15   Metrics: Disabled
Encryption:              Disabled
Cluster health:          2/2 reachable   (2024-11-06T08:22:17Z)
Modules Health:          Stopped(0) Degraded(0) OK(11)
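The version can also be checked directly, without parsing the full status output. A minimal sketch, assuming the same `cilium` namespace and DaemonSet name used above:

```shell
# Print the client and daemon versions of the Cilium agent.
kubectl exec -n cilium ds/cilium -c cilium-agent -- cilium version
```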
  • Validate the cluster health check.
    • cilium-health is a tool built into Cilium that provides visibility into the overall health of the cluster’s networking and connectivity. You can check node-to-node connectivity with cilium-health status.
kubectl exec -n cilium -ti ds/cilium -- cilium-health status

Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Probe time:   2024-11-06T08:24:17Z
Nodes:
  amit-rosa/ip-10-0-0-65.ap-northeast-2.compute.internal (localhost):
    Host connectivity to 10.0.0.65:
      ICMP to stack:   OK, RTT=275.771µs
      HTTP to agent:   OK, RTT=151.67µs
    Endpoint connectivity to 10.128.3.9:
      ICMP to stack:   OK, RTT=299.903µs
      HTTP to agent:   OK, RTT=343.729µs
  amit-rosa/ip-10-0-0-62.ap-northeast-2.compute.internal:
    Host connectivity to 10.0.0.62:
      ICMP to stack:   OK, RTT=312.803µs
      HTTP to agent:   OK, RTT=367.915µs
    Endpoint connectivity to 10.128.0.52:
      ICMP to stack:   OK, RTT=416.841µs
      HTTP to agent:   OK, RTT=538.236µs
  • Cilium Connectivity Test
    • The Cilium connectivity test deploys a series of services, deployments, and CiliumNetworkPolicy resources, then exercises various connectivity paths between them. These paths include connections with and without service load-balancing and under various network policy combinations.
cat <<EOF | kubectl create -f -
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: cilium-test
allowHostPorts: true
allowHostNetwork: true
users:
  - system:serviceaccount:cilium-test:default
  - system:serviceaccount:cilium-test:client
  - system:serviceaccount:cilium-test:client2
  - system:serviceaccount:cilium-test:client3
  - system:serviceaccount:cilium-test:echo-other-node
  - system:serviceaccount:cilium-test:echo-same-node
  - system:serviceaccount:cilium-test-1:default
  - system:serviceaccount:cilium-test-1:client
  - system:serviceaccount:cilium-test-1:client2
  - system:serviceaccount:cilium-test-1:client3
  - system:serviceaccount:cilium-test-1:echo-other-node
  - system:serviceaccount:cilium-test-1:echo-same-node
priority: null
readOnlyRootFilesystem: false
runAsUser:
  # This is required by the json-server image
  # POST requests cause a file access issue otherwise
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
volumes: null
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostPID: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: 
  - NET_RAW
defaultAddCapabilities: null
requiredDropCapabilities: null
groups: null
EOF
cilium connectivity test -n cilium 

⚠️  Assuming Cilium version 1.15.5 for connectivity tests
ℹ️  Monitor aggregation detected, will skip some flow validation steps
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Creating namespace cilium-test-1 for connectivity check...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying echo-same-node service...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying DNS test server configmap...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying same-node deployment...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying client deployment...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying client2 deployment...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying client3 deployment...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying echo-other-node service...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Deploying other-node deployment...
[host-netns] Deploying api-amit-rosa-eypm-p3-openshiftapps-com:443 daemonset...
[host-netns-non-cilium] Deploying api-amit-rosa-eypm-p3-openshiftapps-com:443 daemonset...
ℹ️  Skipping tests that require a node Without Cilium
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for deployment cilium-test-1/client to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for deployment cilium-test-1/client2 to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for deployment cilium-test-1/echo-same-node to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for deployment cilium-test-1/client3 to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for deployment cilium-test-1/echo-other-node to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client-974f6c69d-zdssr to reach DNS server on cilium-test-1/echo-same-node-c549568d9-k42f8 pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client2-57cf4468f-w5kf7 to reach DNS server on cilium-test-1/echo-same-node-c549568d9-k42f8 pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client3-67f959dd9b-n2pfg to reach DNS server on cilium-test-1/echo-same-node-c549568d9-k42f8 pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client-974f6c69d-zdssr to reach DNS server on cilium-test-1/echo-other-node-796bd758f9-hk72z pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client2-57cf4468f-w5kf7 to reach DNS server on cilium-test-1/echo-other-node-796bd758f9-hk72z pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client3-67f959dd9b-n2pfg to reach DNS server on cilium-test-1/echo-other-node-796bd758f9-hk72z pod...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client-974f6c69d-zdssr to reach default/kubernetes service...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client2-57cf4468f-w5kf7 to reach default/kubernetes service...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for pod cilium-test-1/client3-67f959dd9b-n2pfg to reach default/kubernetes service...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-other-node to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-other-node to be synchronized by Cilium pod cilium/cilium-7wwg6
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-other-node to be synchronized by Cilium pod cilium/cilium-p9nn4
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-same-node to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-same-node to be synchronized by Cilium pod cilium/cilium-7wwg6
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for Service cilium-test-1/echo-same-node to be synchronized by Cilium pod cilium/cilium-p9nn4
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for NodePort 10.0.0.65:32408 (cilium-test-1/echo-other-node) to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for NodePort 10.0.0.65:32184 (cilium-test-1/echo-same-node) to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for NodePort 10.0.0.62:32408 (cilium-test-1/echo-other-node) to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for NodePort 10.0.0.62:32184 (cilium-test-1/echo-same-node) to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for DaemonSet cilium-test-1/host-netns-non-cilium to become ready...
[api-amit-rosa-eypm-p3-openshiftapps-com:443] Waiting for DaemonSet cilium-test-1/host-netns to become ready...
ℹ️  Skipping IPCache check
🔭 Enabling Hubble telescope...
ℹ️  Expose Relay locally with:
  cilium hubble enable
  cilium hubble port-forward&
ℹ️  Cilium version: 1.15.5
🏃[cilium-test-1] Running 86 tests ...
[=] [cilium-test-1] Test [no-unexpected-packet-drops] [1/86]
..
[=] [cilium-test-1] Test [no-policies] [2/86]
.................................................
[=] [cilium-test-1] Skipping test [no-policies-from-outside] [3/86] (skipped by condition)
[=] [cilium-test-1] Test [no-policies-extra] [4/86]
............
[=] [cilium-test-1] Test [allow-all-except-world] [5/86]
........................
[=] [cilium-test-1] Test [client-ingress] [6/86]
......
[=] [cilium-test-1] Test [client-ingress-knp] [7/86]
......
[=] [cilium-test-1] Test [allow-all-with-metrics-check] [8/86]
......
[=] [cilium-test-1] Test [all-ingress-deny] [9/86]
............
[=] [cilium-test-1] Skipping test [all-ingress-deny-from-outside] [10/86] (skipped by condition)
[=] [cilium-test-1] Test [all-ingress-deny-knp] [11/86]
............
[=] [cilium-test-1] Test [all-egress-deny] [12/86]
........................
[=] [cilium-test-1] Test [all-egress-deny-knp] [13/86]
........................
[=] [cilium-test-1] Test [all-entities-deny] [14/86]
............
[=] [cilium-test-1] Test [cluster-entity] [15/86]
...
[=] [cilium-test-1] Skipping test [cluster-entity-multi-cluster] [16/86] (skipped by condition)
[=] [cilium-test-1] Test [host-entity-egress] [17/86]
......
[=] [cilium-test-1] Test [host-entity-ingress] [18/86]
....
[=] [cilium-test-1] Test [echo-ingress] [19/86]
......
[=] [cilium-test-1] Skipping test [echo-ingress-from-outside] [20/86] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-knp] [21/86]
......
[=] [cilium-test-1] Test [client-ingress-icmp] [22/86]
......
[=] [cilium-test-1] Test [client-egress] [23/86]
......
[=] [cilium-test-1] Test [client-egress-knp] [24/86]
......
[=] [cilium-test-1] Test [client-egress-expression] [25/86]
......
[=] [cilium-test-1] Test [client-egress-expression-knp] [26/86]
......
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo] [27/86]
......
[=] [cilium-test-1] Test [client-egress-to-echo-service-account] [28/86]
......
[=] [cilium-test-1] Test [to-entities-world] [29/86]
.........
[=] [cilium-test-1] Test [to-cidr-external] [30/86]
......
[=] [cilium-test-1] Test [to-cidr-external-knp] [31/86]
......
[=] [cilium-test-1] Skipping test [from-cidr-host-netns] [32/86] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-from-other-client-deny] [33/86]
..........
[=] [cilium-test-1] Test [client-ingress-from-other-client-icmp-deny] [34/86]
............
[=] [cilium-test-1] Test [client-egress-to-echo-deny] [35/86]
............
[=] [cilium-test-1] Test [client-ingress-to-echo-named-port-deny] [36/86]
....
[=] [cilium-test-1] Test [client-egress-to-echo-expression-deny] [37/86]
....
[=] [cilium-test-1] Test [client-with-service-account-egress-to-echo-deny] [38/86]
....
[=] [cilium-test-1] Test [client-egress-to-echo-service-account-deny] [39/86]
..
[=] [cilium-test-1] Test [client-egress-to-cidr-deny] [40/86]
......
[=] [cilium-test-1] Test [client-egress-to-cidr-deny-default] [41/86]
......
[=] [cilium-test-1] Skipping test [clustermesh-endpointslice-sync] [42/86] (skipped by condition)
[=] [cilium-test-1] Test [health] [43/86]
..
[=] [cilium-test-1] Skipping test [north-south-loadbalancing] [44/86] (Feature node-without-cilium is disabled)
[=] [cilium-test-1] Test [pod-to-pod-encryption] [45/86]
.
[=] [cilium-test-1] Skipping test [pod-to-pod-with-l7-policy-encryption] [46/86] (requires Feature encryption-pod mode wireguard, got disabled)
[=] [cilium-test-1] Test [node-to-node-encryption] [47/86]
...
[=] [cilium-test-1] Skipping test [egress-gateway-excluded-cidrs] [49/86] (Feature enable-ipv4-egress-gateway is disabled)
[=] [cilium-test-1] Skipping test [egress-gateway] [48/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [egress-gateway-with-l7-policy] [50/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [north-south-loadbalancing-with-l7-policy] [52/86] (Feature node-without-cilium is disabled)
[=] [cilium-test-1] Skipping test [pod-to-node-cidrpolicy] [51/86] (Feature cidr-match-nodes is disabled)
[=] [cilium-test-1] Test [echo-ingress-l7] [53/86]
..................
[=] [cilium-test-1] Skipping test [echo-ingress-l7-via-hostport] [54/86] (skipped by condition)
[=] [cilium-test-1] Test [echo-ingress-l7-named-port] [55/86]
..................
[=] [cilium-test-1] Test [client-egress-l7-method] [56/86]
..................
[=] [cilium-test-1] Test [client-egress-l7] [57/86]
...............
[=] [cilium-test-1] Test [client-egress-l7-named-port] [58/86]
...............
[=] [cilium-test-1] Skipping test [client-egress-l7-tls-deny-without-headers] [59/86] (Feature secret-backend-k8s is disabled)
[=] [cilium-test-1] Skipping test [client-egress-l7-tls-headers] [60/86] (Feature secret-backend-k8s is disabled)
[=] [cilium-test-1] Skipping test [client-egress-l7-set-header] [61/86] (Feature secret-backend-k8s is disabled)
[=] [cilium-test-1] Skipping test [echo-ingress-auth-always-fail] [62/86] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [echo-ingress-mutual-auth-spiffe] [63/86] (Feature mutual-auth-spiffe is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service] [64/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-all] [65/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-ingress-identity] [66/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-deny-backend-service] [67/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [pod-to-ingress-service-allow-ingress-identity] [68/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service] [69/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-world-identity] [70/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-cidr] [71/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Skipping test [outside-to-ingress-service-deny-all-ingress] [72/86] (Feature ingress-controller is disabled)
[=] [cilium-test-1] Test [dns-only] [73/86]
...............
[=] [cilium-test-1] Test [to-fqdns] [74/86]
............
[=] [cilium-test-1] Skipping test [pod-to-controlplane-host] [75/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-k8s-on-controlplane] [76/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-controlplane-host-cidr] [77/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [pod-to-k8s-on-controlplane-cidr] [78/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [local-redirect-policy] [79/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [local-redirect-policy-with-node-dns] [80/86] (skipped by condition)
[=] [cilium-test-1] Test [pod-to-pod-no-frag] [81/86]
.
[=] [cilium-test-1] Skipping test [bgp-control-plane-v1] [82/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [host-firewall-ingress] [84/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [bgp-control-plane-v2] [83/86] (skipped by condition)
[=] [cilium-test-1] Skipping test [host-firewall-egress] [85/86] (skipped by condition)
[=] [cilium-test-1] Test [check-log-errors] [86/86]
...................
[cilium-test-1] All 48 tests (474 actions) successful, 38 tests skipped, 0 scenarios skipped.
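Once the connectivity test passes, you may want to clean up the resources it created. A minimal sketch, assuming the test namespace and SecurityContextConstraints names used above:

```shell
# Remove the namespace created by the connectivity test
kubectl delete ns cilium-test-1 --ignore-not-found
# Remove the temporary SCC created for the test pods
kubectl delete scc cilium-test --ignore-not-found
```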

Conclusion

Hopefully, this post gave you a good overview of deploying a Red Hat OpenShift Service on AWS (ROSA) cluster without a preinstalled CNI plugin and then adding Isovalent Enterprise for Cilium as the CNI plugin. If you’d like to learn more, you can schedule a demo with our experts.

Try it Out

Choose your way to explore Cilium with our Cilium learning tracks, which focus on features critical to engineers running Cilium in cloud environments. Cilium comes in different flavors, and whether you use GKE, AKS, Red Hat OpenShift, or EKS, not all of these features will apply to every managed Kubernetes service. Still, the tracks should give you a good idea of the features relevant to operating Cilium in the cloud.

Suggested Reading

Amit Gupta, Senior Technical Marketing Engineer
