Chapter 2. Installing and configuring Ceph for OpenStack
As a storage administrator, you must install and configure Ceph before the Red Hat OpenStack Platform can use the Ceph block devices.

Prerequisites

A new or existing Red Hat Ceph Storage cluster.

2.1. Creating Ceph pools for OpenStack

You can create Ceph pools for use with OpenStack. By default, Ceph block devices use the rbd pool, but you can use any available pool.

Prerequisites

A running Red Hat Ceph Storage cluster.

Procedure

Verify that the Red Hat Ceph Storage cluster is running and is in a HEALTH_OK state:

ceph -s

Create the Ceph pools:

Example

ceph osd pool create volumes 128
ceph osd pool create backups 128
ceph osd pool create images 128
ceph osd pool create vms 128

In the above example, 128 is the number of placement groups.

Important: Red Hat recommends using the Ceph Placement Groups per Pool Calculator to calculate a suitable number of placement groups for the pools.

Additional Resources

See the Pools chapter in the Storage Strategies guide for more details on creating pools.

2.2. Installing the Ceph client on OpenStack

You can install the Ceph client packages on the Red Hat OpenStack Platform to access the Ceph storage cluster.

Prerequisites

A running Red Hat Ceph Storage cluster.
Access to the Ceph software repository.
Root-level access to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes.

Procedure

On the OpenStack Nova, Cinder, and Cinder Backup nodes, install the following packages:

dnf install python-rbd ceph-common

On the OpenStack Glance host, install the python-rbd package:

dnf install python-rbd

2.3. Copying the Ceph configuration file to OpenStack

Copy the Ceph configuration file to the nova-compute, cinder-backup, cinder-volume, and glance-api nodes.

Prerequisites

A running Red Hat Ceph Storage cluster.
Access to the Ceph software repository.
Root-level access to the OpenStack Nova, Cinder, and Glance nodes.

Procedure

Copy the Ceph configuration file from the Ceph Monitor host to the OpenStack Nova, Cinder, Cinder Backup, and Glance nodes:

scp /etc/ceph/ceph.conf OPENSTACK_NODES:/etc/ceph

2.4. Configuring Ceph client authentication

You can configure authentication for the Ceph client to access the Red Hat OpenStack Platform.

Prerequisites

Root-level access to the Ceph Monitor host.
A running Red Hat Ceph Storage cluster.

Procedure

From a Ceph Monitor host, create new users for Cinder, Cinder Backup, and Glance:

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Add the keyrings for client.cinder, client.cinder-backup, and client.glance to the appropriate nodes and change their ownership:

ceph auth get-or-create client.cinder | ssh CINDER_VOLUME_NODE sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh CINDER_VOLUME_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh CINDER_BACKUP_NODE tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh CINDER_BACKUP_NODE chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
ceph auth get-or-create client.glance | ssh GLANCE_API_NODE sudo tee /etc/ceph/ceph.client.glance.keyring
ssh GLANCE_API_NODE chown glance:glance /etc/ceph/ceph.client.glance.keyring

OpenStack Nova nodes need the keyring file for the nova-compute process:

ceph auth get-or-create client.cinder | ssh NOVA_NODE tee /etc/ceph/ceph.client.cinder.keyring

The OpenStack Nova nodes also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs the secret key to access the cluster while attaching a block device from Cinder. Create a temporary copy of the secret key on the OpenStack Nova nodes:

ceph auth get-key client.cinder | ssh NOVA_NODE tee client.cinder.key

If the storage cluster contains Ceph block device images that use the exclusive-lock feature, ensure that all Ceph block device users have permissions to blocklist clients:

ceph auth caps client.ID mon 'allow r, allow command "osd blacklist"' osd 'EXISTING_OSD_USER_CAPS'

Return to the OpenStack Nova host:

ssh NOVA_NODE

Generate a UUID for the secret, and save the UUID of the secret for configuring nova-compute later:

uuidgen > uuid-secret.txt

Note: You do not necessarily need the UUID on all the Nova compute nodes. However, from a platform consistency perspective, it is better to keep the same UUID.

On the OpenStack Nova nodes, add the secret key to libvirt and remove the temporary copy of the key:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>`cat uuid-secret.txt`</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF

Set and define the secret for libvirt:

virsh secret-define --file secret.xml
virsh secret-set-value --secret $(cat uuid-secret.txt) --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml

A verification sketch follows the Additional Resources below.

Additional Resources

See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for more details.
See the Configuring the existing Ceph Storage cluster section in the Integrating an Overcloud with an Existing Red Hat Ceph Cluster guide for Red Hat OpenStack Platform for more information about user capabilities.
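Before wiring the UUID into the OpenStack services, it is worth confirming that libvirt actually holds the secret. The following verification sketch is not part of the original procedure; it assumes uuid-secret.txt still contains the UUID generated above:

# List defined libvirt secrets; the UUID should appear with a ceph usage of client.cinder secret
virsh secret-list

# Print the base64-encoded key that was set for that secret
virsh secret-get-value --secret $(cat uuid-secret.txt)

The same UUID is what nova-compute and Cinder later reference (for example, through the rbd_secret_uuid option), which is why keeping one UUID across compute nodes simplifies configuration.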
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/block_device_to_openstack_guide/installing-and-configuring-ceph-for-openstack
Chapter 8. Installing on GCP
8.1. Preparing to install on GCP

8.1.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.

8.1.2. Requirements for installing OpenShift Container Platform on GCP

Before installing OpenShift Container Platform on Google Cloud Platform (GCP), you must create a service account and configure a GCP project. See Configuring a GCP project for details about creating a project, enabling API services, configuring DNS, GCP account limits, and supported GCP regions.

If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, see Manually creating IAM for GCP for other options.

8.1.3. Choosing a method to install OpenShift Container Platform on GCP

You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself.

See Installation process for more information about installer-provisioned and user-provisioned installation processes.

8.1.3.1. Installing a cluster on installer-provisioned infrastructure

You can install a cluster on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods:

Installing a cluster quickly on GCP: You can install OpenShift Container Platform on GCP infrastructure that is provisioned by the OpenShift Container Platform installation program. You can install a cluster quickly by using the default configuration options.

Installing a customized cluster on GCP: You can install a customized cluster on GCP infrastructure that the installation program provisions. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation.

Installing a cluster on GCP with network customizations: You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements.

Installing a cluster on GCP in a restricted network: You can install OpenShift Container Platform on GCP on installer-provisioned infrastructure by using an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. While you can install OpenShift Container Platform by using the mirrored content, your cluster still requires internet access to use the GCP APIs.

Installing a cluster into an existing Virtual Private Cloud: You can install OpenShift Container Platform on an existing GCP Virtual Private Cloud (VPC). You can use this installation method if you have constraints set by the guidelines of your company, such as limits on creating new accounts or infrastructure.
Installing a private cluster on an existing VPC: You can install a private cluster on an existing GCP VPC. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet.

8.1.3.2. Installing a cluster on user-provisioned infrastructure

You can install a cluster on GCP infrastructure that you provision, by using one of the following methods:

Installing a cluster on GCP with user-provisioned infrastructure: You can install OpenShift Container Platform on GCP infrastructure that you provide. You can use the provided Deployment Manager templates to assist with the installation.

Installing a cluster with shared VPC on user-provisioned infrastructure in GCP: You can use the provided Deployment Manager templates to create GCP resources in a shared VPC infrastructure.

Installing a cluster on GCP in a restricted network with user-provisioned infrastructure: You can install OpenShift Container Platform on GCP in a restricted network with user-provisioned infrastructure. By creating an internal mirror of the installation release content, you can install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content.

8.1.4. Next steps

Configuring a GCP project

8.2. Configuring a GCP project

Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it.

8.2.1. Creating a GCP project

To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster.

Procedure

Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation.

Important: Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing.

8.2.2. Enabling API services in GCP

Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation.

Prerequisites

You created a project to host your cluster.

Procedure

Enable the following required API services in the project that hosts your cluster. You can also enable optional API services, which are not required for installation. See Enabling services in the GCP documentation, or use the command-line sketch after the tables.

Table 8.1. Required API services (API service: console service name)

Compute Engine API: compute.googleapis.com
Cloud Resource Manager API: cloudresourcemanager.googleapis.com
Google DNS API: dns.googleapis.com
IAM Service Account Credentials API: iamcredentials.googleapis.com
Identity and Access Management (IAM) API: iam.googleapis.com
Service Usage API: serviceusage.googleapis.com

Table 8.2. Optional API services (API service: console service name)

Google Cloud APIs: cloudapis.googleapis.com
Service Management API: servicemanagement.googleapis.com
Google Cloud Storage JSON API: storage-api.googleapis.com
Cloud Storage: storage-component.googleapis.com
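If you prefer the command line, the required services in Table 8.1 can also be enabled with the gcloud CLI. This sketch is illustrative rather than part of the official procedure; it assumes the gcloud CLI is installed and authenticated against the project that hosts the cluster:

# Enable all required services in one call
$ gcloud services enable compute.googleapis.com \
    cloudresourcemanager.googleapis.com \
    dns.googleapis.com \
    iamcredentials.googleapis.com \
    iam.googleapis.com \
    serviceusage.googleapis.com

# Confirm which services are now enabled in the project
$ gcloud services list --enabled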
8.2.3. Configuring DNS for GCP

To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project where you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster.

Procedure

Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source.

Note: If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains.

Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com, or subdomain, such as clusters.openshiftcorp.com.

Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers.

Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers.

If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation.

If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company.

A command-line sketch of the zone creation and name server lookup steps follows.
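The zone creation and name server lookup steps can also be performed with the gcloud CLI. In this sketch, the zone name cluster-zone is hypothetical and openshiftcorp.com stands in for your own domain; it is not a substitute for the console procedures referenced above:

# Create a public managed zone for the domain (hypothetical zone name)
$ gcloud dns managed-zones create cluster-zone \
    --dns-name="openshiftcorp.com." \
    --description="Public zone for OpenShift Container Platform"

# Look up the authoritative name servers to configure at your registrar
$ gcloud dns managed-zones describe cluster-zone --format="value(nameServers)"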
8.2.4. GCP account limits

The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default quotas do not affect your ability to install a default OpenShift Container Platform cluster.

A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys.

Table 8.3. GCP resources used in a default cluster

Service | Component | Location | Total resources required | Resources removed after bootstrap
Service account | IAM | Global | 5 | 0
Firewall rules | Compute | Global | 11 | 1
Forwarding rules | Compute | Global | 2 | 0
In-use global IP addresses | Compute | Global | 4 | 1
Health checks | Compute | Global | 3 | 0
Images | Compute | Global | 1 | 0
Networks | Compute | Global | 2 | 0
Static IP addresses | Compute | Region | 4 | 1
Routers | Compute | Global | 1 | 0
Routes | Compute | Global | 2 | 0
Subnetworks | Compute | Global | 2 | 0
Target pools | Compute | Global | 3 | 0
CPUs | Compute | Region | 28 | 4
Persistent disk SSD (GB) | Compute | Region | 896 | 128

Note: If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient.

If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit:

asia-east2
asia-northeast2
asia-south1
australia-southeast1
europe-north1
europe-west2
europe-west3
europe-west6
northamerica-northeast1
southamerica-east1
us-west2

You can increase resource quotas from the GCP console, but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster.

8.2.5. Creating a service account in GCP

OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one.

Prerequisites

You created a project to host your cluster.

Procedure

Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation.

Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources.

Note: While making the service account an owner of the project is the easiest way to gain the required permissions, it means that the service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable.

Create the service account key in JSON format. See Creating service account keys in the GCP documentation. The service account key is required to create a cluster.

A command-line sketch of these steps follows.
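As an illustration of the procedure above, the same steps can be performed with the gcloud CLI. The service account name openshift-installer and the key file name are hypothetical, and this sketch takes the Owner-role shortcut rather than the granular roles listed in the next section:

# Create the service account (hypothetical name)
$ gcloud iam service-accounts create openshift-installer \
    --display-name="OpenShift installer"

# Grant it the Owner role on the project (broadest option; see the role list below for a narrower set)
$ gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:openshift-installer@<project_id>.iam.gserviceaccount.com" \
    --role="roles/owner"

# Create a JSON key for the installation program to consume
$ gcloud iam service-accounts keys create gcp-service-account.json \
    --iam-account="openshift-installer@<project_id>.iam.gserviceaccount.com"

The JSON key file that the last command writes is what you later provide to the installation program.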
8.2.5.1. Required GCP roles

When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists:

Required roles for the installation program

Compute Admin
Security Admin
Service Account Admin
Service Account Key Admin
Service Account User
Storage Admin

Required roles for creating network resources during installation

DNS Administrator

The roles are applied to the service accounts that the control plane and compute machines use:

Table 8.4. GCP service account permissions

Account: Control Plane
Roles: roles/compute.instanceAdmin, roles/compute.networkAdmin, roles/compute.securityAdmin, roles/storage.admin, roles/iam.serviceAccountUser

Account: Compute
Roles: roles/compute.viewer, roles/storage.admin

8.2.5.2. Required GCP permissions for installer-provisioned infrastructure

When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster.

Example 8.1. Required permissions for creating network resources

compute.addresses.create
compute.addresses.createInternal
compute.addresses.delete
compute.addresses.get
compute.addresses.list
compute.addresses.use
compute.addresses.useInternal
compute.firewalls.create
compute.firewalls.delete
compute.firewalls.get
compute.firewalls.list
compute.forwardingRules.create
compute.forwardingRules.get
compute.forwardingRules.list
compute.forwardingRules.setLabels
compute.networks.create
compute.networks.get
compute.networks.list
compute.networks.updatePolicy
compute.routers.create
compute.routers.get
compute.routers.list
compute.routers.update
compute.routes.list
compute.subnetworks.create
compute.subnetworks.get
compute.subnetworks.list
compute.subnetworks.use
compute.subnetworks.useExternalIp

Example 8.2. Required permissions for creating load balancer resources

compute.regionBackendServices.create
compute.regionBackendServices.get
compute.regionBackendServices.list
compute.regionBackendServices.update
compute.regionBackendServices.use
compute.targetPools.addInstance
compute.targetPools.create
compute.targetPools.get
compute.targetPools.list
compute.targetPools.removeInstance
compute.targetPools.use

Example 8.3. Required permissions for creating DNS resources

dns.changes.create
dns.changes.get
dns.managedZones.create
dns.managedZones.get
dns.managedZones.list
dns.networks.bindPrivateDNSZone
dns.resourceRecordSets.create
dns.resourceRecordSets.list

Example 8.4. Required permissions for creating Service Account resources

iam.serviceAccountKeys.create
iam.serviceAccountKeys.delete
iam.serviceAccountKeys.get
iam.serviceAccountKeys.list
iam.serviceAccounts.actAs
iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.get
iam.serviceAccounts.list
resourcemanager.projects.get
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy

Example 8.5. Required permissions for creating compute resources

compute.disks.create
compute.disks.get
compute.disks.list
compute.instanceGroups.create
compute.instanceGroups.delete
compute.instanceGroups.get
compute.instanceGroups.list
compute.instanceGroups.update
compute.instanceGroups.use
compute.instances.create
compute.instances.delete
compute.instances.get
compute.instances.list
compute.instances.setLabels
compute.instances.setMetadata
compute.instances.setServiceAccount
compute.instances.setTags
compute.instances.use
compute.machineTypes.get
compute.machineTypes.list

Example 8.6. Required for creating storage resources

storage.buckets.create
storage.buckets.delete
storage.buckets.get
storage.buckets.list
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list

Example 8.7. Required permissions for creating health check resources

compute.healthChecks.create
compute.healthChecks.get
compute.healthChecks.list
compute.healthChecks.useReadOnly
compute.httpHealthChecks.create
compute.httpHealthChecks.get
compute.httpHealthChecks.list
compute.httpHealthChecks.useReadOnly

Example 8.8. Required permissions to get GCP zone and region related information

compute.globalOperations.get
compute.regionOperations.get
compute.regions.list
compute.zoneOperations.get
compute.zones.get
compute.zones.list

Example 8.9. Required permissions for checking services and quotas

monitoring.timeSeries.list
serviceusage.quotas.get
serviceusage.services.list

Example 8.10. Required IAM permissions for installation

iam.roles.get

Example 8.11. Optional Images permissions for installation

compute.images.list
Example 8.12. Optional permission for running gather bootstrap

compute.instances.getSerialPortOutput

Example 8.13. Required permissions for deleting network resources

compute.addresses.delete
compute.addresses.deleteInternal
compute.addresses.list
compute.firewalls.delete
compute.firewalls.list
compute.forwardingRules.delete
compute.forwardingRules.list
compute.networks.delete
compute.networks.list
compute.networks.updatePolicy
compute.routers.delete
compute.routers.list
compute.routes.list
compute.subnetworks.delete
compute.subnetworks.list

Example 8.14. Required permissions for deleting load balancer resources

compute.regionBackendServices.delete
compute.regionBackendServices.list
compute.targetPools.delete
compute.targetPools.list

Example 8.15. Required permissions for deleting DNS resources

dns.changes.create
dns.managedZones.delete
dns.managedZones.get
dns.managedZones.list
dns.resourceRecordSets.delete
dns.resourceRecordSets.list

Example 8.16. Required permissions for deleting Service Account resources

iam.serviceAccounts.delete
iam.serviceAccounts.get
iam.serviceAccounts.list
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy

Example 8.17. Required permissions for deleting compute resources

compute.disks.delete
compute.disks.list
compute.instanceGroups.delete
compute.instanceGroups.list
compute.instances.delete
compute.instances.list
compute.instances.stop
compute.machineTypes.list

Example 8.18. Required for deleting storage resources

storage.buckets.delete
storage.buckets.getIamPolicy
storage.buckets.list
storage.objects.delete
storage.objects.list

Example 8.19. Required permissions for deleting health check resources

compute.healthChecks.delete
compute.healthChecks.list
compute.httpHealthChecks.delete
compute.httpHealthChecks.list

Example 8.20. Required Images permissions for deletion

compute.images.list

8.2.6. Supported GCP regions

You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions:

asia-east1 (Changhua County, Taiwan)
asia-east2 (Hong Kong)
asia-northeast1 (Tokyo, Japan)
asia-northeast2 (Osaka, Japan)
asia-northeast3 (Seoul, South Korea)
asia-south1 (Mumbai, India)
asia-south2 (Delhi, India)
asia-southeast1 (Jurong West, Singapore)
asia-southeast2 (Jakarta, Indonesia)
australia-southeast1 (Sydney, Australia)
australia-southeast2 (Melbourne, Australia)
europe-central2 (Warsaw, Poland)
europe-north1 (Hamina, Finland)
europe-southwest1 (Madrid, Spain)
europe-west1 (St. Ghislain, Belgium)
europe-west2 (London, England, UK)
europe-west3 (Frankfurt, Germany)
europe-west4 (Eemshaven, Netherlands)
europe-west6 (Zurich, Switzerland)
europe-west8 (Milan, Italy)
europe-west9 (Paris, France)
northamerica-northeast1 (Montreal, Quebec, Canada)
northamerica-northeast2 (Toronto, Ontario, Canada)
southamerica-east1 (Sao Paulo, Brazil)
southamerica-west1 (Santiago, Chile)
us-central1 (Council Bluffs, Iowa, USA)
us-east1 (Moncks Corner, South Carolina, USA)
us-east4 (Ashburn, Northern Virginia, USA)
us-east5 (Columbus, Ohio)
us-south1 (Dallas, Texas)
us-west1 (The Dalles, Oregon, USA)
us-west2 (Los Angeles, California, USA)
us-west3 (Salt Lake City, Utah, USA)
us-west4 (Las Vegas, Nevada, USA)

8.2.7. Next steps

Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options. A command-line sketch for checking available regions and quotas appears after this section.
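Before settling on a region, you can confirm what is available to your project and whether the CPU and persistent disk SSD quotas discussed in section 8.2.4 leave enough headroom. This optional sketch is not part of the original procedure; the region name is an example:

# List the regions visible to your project
$ gcloud compute regions list

# Inspect per-region quota limits and current usage (for example, us-central1)
$ gcloud compute regions describe us-central1 --format="yaml(quotas)"

The describe command reports quota metrics such as CPUS and SSD_TOTAL_GB, which installations most often exhaust.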
8.3. Manually creating IAM for GCP

In environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace, you can put the Cloud Credential Operator (CCO) into manual mode before you install the cluster.

8.3.1. Alternatives to storing administrator-level secrets in the kube-system project

The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. If you prefer not to store an administrator-level credential secret in the cluster kube-system project, you can choose one of the following options when installing OpenShift Container Platform:

Use manual mode with GCP Workload Identity: You can use the CCO utility (ccoctl) to configure the cluster to use manual mode with GCP Workload Identity. When the CCO utility is used to configure the cluster for GCP Workload Identity, it signs service account tokens that provide short-term, limited-privilege security credentials to components.

Note: This credentials strategy is supported for only new OpenShift Container Platform clusters and must be configured during installation. You cannot reconfigure an existing cluster that uses a different credentials strategy to use this feature.

Manage cloud credentials manually: You can set the credentialsMode parameter for the CCO to Manual to manage cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them.

Remove the administrator-level credential secret after installing OpenShift Container Platform with mint mode: If you are using the CCO with the credentialsMode parameter set to Mint, you can remove or rotate the administrator-level credential after installing OpenShift Container Platform. Mint mode is the default configuration for the CCO. This option requires the presence of the administrator-level credential during an installation. The administrator-level credential is used during the installation to mint other credentials with some permissions granted. The original credential secret is not stored in the cluster permanently.

Note: Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

Additional resources

Using manual mode with GCP Workload Identity
Rotating or removing cloud provider credentials

For a detailed description of all available CCO credential modes and their supported platforms, see About the Cloud Credential Operator.

8.3.2. Manually create IAM

The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
Procedure

Change to the directory that contains the installation program and create the install-config.yaml file by running the following command:

$ openshift-install create install-config --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual.

Example install-config.yaml configuration file

apiVersion: v1
baseDomain: cluster1.example.com
credentialsMode: Manual 1
compute:
- architecture: amd64
  hyperthreading: Enabled
...

1 This line is added to set the credentialsMode parameter to Manual.

Generate the manifests by running the following command from the directory that contains the installation program:

$ openshift-install create manifests --dir <installation_directory>

where <installation_directory> is the directory in which the installation program creates files.

From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command:

$ openshift-install version

Example output

release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64

Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command:

$ oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \
    --credentials-requests \
    --cloud=gcp

This command creates a YAML file for each CredentialsRequest object.

Sample CredentialsRequest object

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component-credentials-request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: GCPProviderSpec
    predefinedRoles:
    - roles/storage.admin
    - roles/iam.serviceAccountUser
    skipServiceCheck: true
  ...

Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.

Sample CredentialsRequest object with secrets

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component-credentials-request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    ...
  secretRef:
    name: <component-secret>
    namespace: <component-namespace>
  ...

Sample Secret object

apiVersion: v1
kind: Secret
metadata:
  name: <component-secret>
  namespace: <component-namespace>
data:
  service_account.json: <base64_encoded_gcp_service_account_file>

A sketch for producing the base64-encoded service account value follows.
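The <base64_encoded_gcp_service_account_file> value above is the service account key JSON encoded as a single base64 line. A minimal sketch for producing it, assuming a hypothetical key file named gcp-service-account.json and GNU coreutils:

# Encode the service account key without line wrapping
$ base64 -w0 gcp-service-account.json

Paste the output into the data.service_account.json field of the Secret manifest.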
Important: The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects.

To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command:

$ grep "release.openshift.io/feature-gate" *

Example output

0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade

From the directory that contains the installation program, proceed with your cluster creation:

$ openshift-install create cluster --dir <installation_directory>

Important: Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.

Additional resources

Updating a cluster using the web console
Updating a cluster using the CLI

8.3.3. Mint mode

Mint mode is the default Cloud Credential Operator (CCO) credentials mode for OpenShift Container Platform on platforms that support it. In this mode, the CCO uses the provided administrator-level cloud credential to run the cluster. Mint mode is supported for AWS and GCP.

In mint mode, the admin credential is stored in the kube-system namespace and then used by the CCO to process the CredentialsRequest objects in the cluster and create users for each with specific permissions. The benefits of mint mode include:

Each cluster component has only the permissions it requires
Automatic, on-going reconciliation for cloud credentials, including additional credentials or permissions that might be required for upgrades

One drawback is that mint mode requires admin credential storage in a cluster kube-system secret.

8.3.4. Mint mode with removal or rotation of the administrator-level credential

Currently, this mode is only supported on AWS and GCP. In this mode, a user installs OpenShift Container Platform with an administrator-level credential just like the normal mint mode. However, this process removes the administrator-level credential secret from the cluster post-installation. The administrator can have the Cloud Credential Operator make its own request for a read-only credential that allows it to verify if all CredentialsRequest objects have their required permissions; thus, the administrator-level credential is not required unless something needs to be changed. After the associated credential is removed, it can be deleted or deactivated on the underlying cloud, if desired.

Note: Prior to a non z-stream upgrade, you must reinstate the credential secret with the administrator-level credential. If the credential is not present, the upgrade might be blocked.

The administrator-level credential is not stored in the cluster permanently. Following these steps still requires the administrator-level credential in the cluster for brief periods of time. It also requires manually re-instating the secret with administrator-level credentials for each upgrade.

8.3.5. Next steps

Install an OpenShift Container Platform cluster:

Installing a cluster quickly on GCP with default options on installer-provisioned infrastructure
Install a cluster with cloud customizations on installer-provisioned infrastructure
Install a cluster with network customizations on installer-provisioned infrastructure

8.4. Installing a cluster quickly on GCP

In OpenShift Container Platform version 4.10, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options.

8.4.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured a GCP project to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.
You have determined that the GCP region to which you are installing supports the N1 machine type. For more information, see the Google documentation. By default, the installation program deploys control plane and compute nodes with the N1 machine type.

Note: If the region to which you are installing does not support the N1 machine type, you cannot complete the installation using these steps. You must specify a supported machine type in the install-config.yaml file before you install the cluster. For more information, see Installing a cluster on GCP with customizations.

8.4.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to:

Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.

Important: If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

8.4.3. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important: Do not skip this procedure in production environments, where disaster recovery and debugging are required.

Note: You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key.
If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.

Note: On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.

If the ssh-agent process is not already running for your local user, start it as a background task:

$ eval "$(ssh-agent -s)"

Example output

Agent pid 31874

Note: If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.

Add your SSH private key to the ssh-agent:

$ ssh-add <path>/<file_name> 1

1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519

Example output

Identity added: /home/<you>/<path>/<file_name> (<computer_name>)

Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file.

$ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"

Verify that the credentials were applied.

$ gcloud auth list

Next steps

When you install OpenShift Container Platform, provide the SSH public key to the installation program.

8.4.4. Obtaining the installation program

Before you install OpenShift Container Platform, download the installation file on a local computer.

Prerequisites

You have a computer that runs Linux or macOS, with 500 MB of local disk space.

Procedure

Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.

Select your infrastructure provider.

Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.

Important: The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.

Important: Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.

Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:

$ tar -xvf openshift-install-linux.tar.gz

Download your installation pull secret from the Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
8.4.5. Deploying the cluster

You can install OpenShift Container Platform on a compatible cloud platform.

Important: You can run the create cluster command of the installation program only once, during initial installation.

Prerequisites

Configure an account with the cloud platform that hosts your cluster.
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.

Procedure

Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations:

The GOOGLE_CREDENTIALS, GOOGLE_CLOUD_KEYFILE_JSON, or GCLOUD_KEYFILE_JSON environment variables
The ~/.gcp/osServiceAccount.json file
The gcloud CLI default credentials

Change to the directory that contains the installation program and initialize the cluster deployment:

$ ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2

1 For <installation_directory>, specify the directory name to store the files that the installation program creates.
2 To view different installation details, specify warn, debug, or error instead of info.

Important: Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

Provide values at the prompts:

Optional: Select an SSH key to use to access your cluster machines.

Note: For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

Select gcp as the platform to target.

If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file.

Select the project ID to provision the cluster in. The default value is specified by the service account that you configured.

Select the region to deploy the cluster to.

Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster.

Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name.

Paste the pull secret from the Red Hat OpenShift Cluster Manager.

Note: If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.

Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note: The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.

Important: The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Important: You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.

Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it.

8.4.6. Installing the OpenShift CLI by downloading the binary

You can install the OpenShift CLI (oc) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS.

Important: If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc.

Installing the OpenShift CLI on Linux

You can install the OpenShift CLI (oc) binary on Linux by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file.
Unpack the archive:

$ tar xvf <file>

Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:

$ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

Installing the OpenShift CLI on Windows

You can install the OpenShift CLI (oc) binary on Windows by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file.
Unzip the archive with a ZIP program.
Move the oc binary to a directory that is on your PATH.
To check your PATH, open the command prompt and execute the following command:

C:\> path

After you install the OpenShift CLI, it is available using the oc command:

C:\> oc <command>

Installing the OpenShift CLI on macOS

You can install the OpenShift CLI (oc) binary on macOS by using the following procedure.

Procedure

Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal.
Select the appropriate version in the Version drop-down menu.
Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file.
Unpack and unzip the archive.
Move the oc binary to a directory on your PATH. To check your PATH, open a terminal and execute the following command:

$ echo $PATH

After you install the OpenShift CLI, it is available using the oc command:

$ oc <command>

8.4.7. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites

You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.

Procedure

Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

Additional resources

See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

8.4.8. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources

See About remote health monitoring for more information about the Telemetry service.

8.4.9. Next steps

Customize your cluster. If necessary, you can opt out of remote health reporting.

8.5. Installing a cluster on GCP with customizations

In OpenShift Container Platform version 4.10, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.

8.5.1. Prerequisites

You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users.
You configured a GCP project to host the cluster.
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials.

8.5.2. Internet access for OpenShift Container Platform

In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to:

Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
Access Quay.io to obtain the packages that are required to install your cluster.
Obtain the packages that are required to perform cluster updates.

Important: If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.

8.5.3. Generating a key pair for cluster node SSH access

During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication.

After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.

If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.

Important: Do not skip this procedure in production environments, where disaster recovery and debugging are required.

Note: You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs.

Procedure

If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:

$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1

1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key.

If you have an existing key pair, ensure your public key is in your ~/.ssh directory.

Note: If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.

View the public SSH key:

$ cat <path>/<file_name>.pub

For example, run the following to view the ~/.ssh/id_ed25519.pub public key:

$ cat ~/.ssh/id_ed25519.pub

Add the SSH private key identity to the SSH agent for your local user, if it has not already been added.
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file. $ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>" Verify that the credentials were applied. $ gcloud auth list Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.5.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. 
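Optional: before you generate the configuration file, you can confirm which installer version you extracted (a quick sanity check; the exact output format varies by release):
$ ./openshift-install version
Procedure Create the install-config.yaml file. 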
Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.5.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.5.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.5. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. 
Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.5.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 8.6. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 
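For example, a complete networking stanza that keeps the documented defaults and overrides only the machine network might look like the following sketch (the 10.10.0.0/16 machine CIDR is an illustrative value, not a default):

networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.10.0.0/16

8.5.5.1.3. 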
Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.7. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. 
If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 8.5.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.8. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing VPC that you want to deploy your cluster to. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.type The GCP machine type . The GCP machine type. platform.gcp.zones The availability zones where the installation program creates machines for the specified MachinePool. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.controlPlaneSubnet The name of the existing subnet in your VPC that you want to deploy your control plane machines to. The subnet name. platform.gcp.computeSubnet The name of the existing subnet in your VPC that you want to deploy your compute machines to. The subnet name. 
platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.osDisk.diskType The type of disk. Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. The worker nodes can be either type. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. 8.5.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.9. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. 
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 8.5.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.21. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.5.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 , which specifies a machine with 6 vCPUs and 20480 MB (20 GB) of memory. As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: custom-6-20480
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: custom-6-20480
  replicas: 3

8.5.5.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 1024
        encryptionKey: 5
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
compute: 6 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
        encryptionKey: 9
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
metadata:
  name: test-cluster 10
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-production 11
    region: us-central1 12
pullSecret: '{"auths": ...}' 13
fips: false 14
sshKey: ssh-ed25519 AAAA... 15

1 10 11 12 13 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" "Creating machine sets" "Creating a machine set on GCP". 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 15 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a machine set 8.5.5.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.5.6. Using a GCP Marketplace image If you want to deploy an OpenShift Container Platform cluster using a GCP Marketplace image, you must create the manifests and edit the compute machine set definitions to specify the GCP Marketplace image. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Generate the installation manifests by running the following command: $ openshift-install create manifests --dir <installation_dir> Locate the following files: <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-1.yaml <installation_dir>/openshift/99_openshift-cluster-api_worker-machineset-2.yaml In each file, edit the .spec.template.spec.providerSpec.value.disks[0].image property to reference the offer to use: OpenShift Container Platform projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 Example compute machine set with the GCP Marketplace image

deletionProtection: false
disks:
- autoDelete: true
  boot: true
  image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145
  labels: null
  sizeGb: 128
  type: pd-ssd
kind: GCPMachineProviderSpec
machineType: n2-standard-4

8.5.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s

Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 8.5.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command> 8.5.9. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.5.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.5.11. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 8.6. Installing a cluster on GCP with network customizations In OpenShift Container Platform version 4.10, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Google Cloud Platform (GCP). By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 8.6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
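For example, on a FIPS-enabled host you might generate an RSA key instead (a minimal sketch; the 4096-bit length is an illustrative choice, not a documented requirement):
$ ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name> 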
Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file. $ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>" Verify that the credentials were applied. $ gcloud auth list Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.6.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
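For example, to guarantee an empty directory you might create a new one for each installation attempt (the directory name ocp-gcp is illustrative):
$ mkdir ocp-gcp
$ ./openshift-install create install-config --dir ocp-gcp 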
At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.6.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.6.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.10. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. 
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.6.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 8.11. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.6.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.12. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. 
true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 8.6.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.13. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing VPC that you want to deploy your cluster to. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.type The GCP machine type . The GCP machine type. platform.gcp.zones The availability zones where the installation program creates machines for the specified MachinePool. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.controlPlaneSubnet The name of the existing subnet in your VPC that you want to deploy your control plane machines to. The subnet name. platform.gcp.computeSubnet The name of the existing subnet in your VPC that you want to deploy your compute machines to. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.osDisk.diskType The type of disk. Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type.
The worker nodes can be either type. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. 8.6.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.14. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. 8.6.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.22. Machine series
C2
C2D
C3
E2
M1
N1
N2
N2D
Tau T2D
8.6.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported.
Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    gcp:
      type: custom-6-20480
  replicas: 2
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    gcp:
      type: custom-6-20480
  replicas: 3
8.6.5.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com 1
controlPlane: 2 3
  hyperthreading: Enabled 4
  name: master
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 1024
        encryptionKey: 5
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
compute: 6 7
- hyperthreading: Enabled 8
  name: worker
  platform:
    gcp:
      type: n2-standard-4
      zones:
      - us-central1-a
      - us-central1-c
      osDisk:
        diskType: pd-standard
        diskSizeGB: 128
        encryptionKey: 9
          kmsKey:
            name: worker-key
            keyRing: test-machine-keys
            location: global
            projectID: project-id
  replicas: 3
metadata:
  name: test-cluster 10
networking: 11
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  gcp:
    projectID: openshift-production 12
    region: us-central1 13
pullSecret: '{"auths": ...}' 14
fips: false 15
sshKey: ssh-ed25519 AAAA... 16
1 10 12 13 14 Required. The installation program prompts you for this value. 2 6 11 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes.
Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" → "Creating machine sets" → "Creating a machine set on GCP". 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.6.6. Additional resources Enabling customer-managed encryption keys for a machine set 8.6.6.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: example.com 3
additionalTrustBundle: | 4
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
...
1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.6.7. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 8.6.8. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
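Optionally, you can list the network-related manifests that the installation program generated before you add your own file. This is a minimal sketch; the exact file names are an assumption and can vary by release:
USD ls <installation_directory>/manifests/cluster-network-*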
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig: {}
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 8.6.9. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 8.6.9.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 8.15. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23
You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example:
spec:
  serviceNetwork:
  - 172.30.0.0/14
You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 8.16. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes .
The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 8.17. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration
defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789
Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 8.18. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 .
genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Table 8.19. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 8.20. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig: {}
kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 8.21. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is:
kubeProxyConfig:
  proxyArguments:
    iptables-min-sync-period:
    - 0s
8.6.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Procedure Change to the directory that contains the installation program and initialize the cluster deployment:
USD ./openshift-install create cluster --dir <installation_directory> \ 1
    --log-level=info 2
1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL"
INFO Time elapsed: 36m22s
Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 8.6.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive:
USD tar xvf <file>
Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command:
USD echo USDPATH
After you install the OpenShift CLI, it is available using the oc command:
USD oc <command>
Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command:
C:\> path
After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command:
USD echo USDPATH
After you install the OpenShift CLI, it is available using the oc command:
USD oc <command>
8.6.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials:
USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1
1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration:
USD oc whoami
Example output
system:admin
Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.6.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.6.14. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 8.7. Installing a cluster on GCP in a restricted network In OpenShift Container Platform 4.10, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 8.7.1.
Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.7.2. About installations in restricted networks In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 8.7.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 8.7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates.
Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key:
USD cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
USD cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task:
USD eval "USD(ssh-agent -s)"
Example output
Agent pid 31874
Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
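For example, on a FIPS-enabled host you might generate an ECDSA key instead. This is a minimal sketch; the key path shown is an illustrative assumption, not a requirement:
USD ssh-keygen -t ecdsa -N '' -f ~/.ssh/id_ecdsa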
Add your SSH private key to the ssh-agent :
USD ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file.
USD export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>"
Verify that the credentials were applied.
USD gcloud auth list
Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.7.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command:
USD ./openshift-install create install-config --dir <installation_directory> 1
1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}'
For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value.
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field:
network: <existing_vpc>
controlPlaneSubnet: <control_plane_subnet>
computeSubnet: <compute_subnet>
For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release
For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.7.5.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.7.5.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.22. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} .
For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io.
{
  "auths":{
    "cloud.openshift.com":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    },
    "quay.io":{
      "auth":"b3Blb=",
      "email":"[email protected]"
    }
  }
}
8.7.5.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 8.23. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example:
networking:
  serviceNetwork:
  - 172.30.0.0/16
networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example:
networking:
  machineNetwork:
  - cidr: 10.0.0.0/16
networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.7.5.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.24. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured.
String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. 
Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 8.7.5.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.25. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing VPC that you want to deploy your cluster to. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.type The GCP machine type . The GCP machine type. platform.gcp.zones The availability zones where the installation program creates machines for the specified MachinePool. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.controlPlaneSubnet The name of the existing subnet in your VPC that you want to deploy your control plane machines to. The subnet name. platform.gcp.computeSubnet The name of the existing subnet in your VPC that you want to deploy your compute machines to. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use.
platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.osDisk.diskType The type of disk. Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. The worker nodes can be either type. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. 8.7.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.26. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a machine with two cores, two threads per core, and one socket provides (2 x 2) x 1 = 4 vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform.
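Before committing to an instance type, it can help to check its vCPU count and memory against the minimums in the table above. A minimal sketch using the gcloud CLI, with n2-standard-4 and us-central1-a as placeholder values:

$ gcloud compute machine-types describe n2-standard-4 --zone=us-central1-a

Compare the guestCpus and memoryMb fields in the output against the vCPU and RAM minimums listed in Table 8.26.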
8.7.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.23. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.7.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 8.7.5.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 10 11 12 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not.
Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" "Creating machine sets" "Creating a machine set on GCP". 13 Specify the name of an existing VPC. 14 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 15 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 19 Provide the contents of the certificate file that you used for your mirror registry. 20 Provide the imageContentSources section from the output of the command to mirror the repository.
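Footnote 16 above calls for base64-encoded mirror registry credentials. A minimal sketch for producing that value, assuming a hypothetical myuser:mypassword credential pair:

$ echo -n 'myuser:mypassword' | base64 -w0
bXl1c2VyOm15cGFzc3dvcmQ=

The resulting string is the value to place in the auth field of the <local_registry> entry in pullSecret.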
8.7.5.6. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 8.7.5.7. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations.
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.7.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud CLI default credentials Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds.
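After the Install complete! message, a quick health check of the new cluster can be run with the exported kubeconfig; a minimal sketch:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc get clusteroperators
$ oc get nodes

All cluster Operators should eventually report Available=True and every node should reach the Ready state.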
Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 8.7.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command>
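For the PATH step, one common choice on Linux is /usr/local/bin; a minimal sketch, assuming the archive was unpacked in the current directory:

$ sudo mv oc /usr/local/bin/
$ oc version

Successful oc version output confirms that the binary is found on your PATH.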
Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command> 8.7.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 8.7.9. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: $ oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 8.7.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.7.11. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster . 8.8. Installing a cluster on GCP into an existing VPC In OpenShift Container Platform version 4.10, you can install a cluster into an existing Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster.
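Because this installation path targets an existing VPC, the parameters you are most likely to customize are the network entries under platform.gcp. A minimal sketch of that stanza, with placeholder values:

platform:
  gcp:
    projectID: <project_id>
    region: us-central1
    network: <existing_vpc>
    controlPlaneSubnet: <control_plane_subnet>
    computeSubnet: <compute_subnet>

Each of these fields is described in the "Additional Google Cloud Platform (GCP) configuration parameters" table later in this section.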
8.8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.8.2. About using a custom VPC In OpenShift Container Platform 4.10, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in Google Cloud Platform (GCP). By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking for the subnets. 8.8.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The subnets must be within the machine network. The installation program does not create the following components: NAT gateways Subnets Route tables VPC network Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 8.8.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide one subnet for control-plane machines and one subnet for compute machines. The subnet's CIDRs belong to the machine CIDR that you specified. 8.8.2.3. Division of permissions Some individuals can create different resources in your cloud than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. 8.8.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 8.8.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry.
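As the VPC validation above notes, the subnets you specify must exist and their CIDRs must belong to the machine CIDR. A minimal sketch for reviewing them with the gcloud CLI, using placeholder names:

$ gcloud compute networks subnets list --network=<existing_vpc> --project=<project_id>

Confirm the control plane and compute subnet names and CIDR ranges from this output before entering them in install-config.yaml.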
8.8.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
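A sketch of two optional checks, assuming the key file name used above; the node address is a placeholder that only resolves after the cluster exists:

$ ssh-add -l
$ ssh core@<node_address>

The first command lists the fingerprints of the identities the agent holds; the second shows the form that a later debugging login to an RHCOS node takes.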
Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file. $ export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>" Verify that the credentials were applied. $ gcloud auth list Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.8.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.8.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target.
If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.8.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.8.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.27. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 8.8.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 8.28. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.8.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.29. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. 
Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: <key1> <key2> <key3>
8.8.6.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.30. Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing VPC that you want to deploy your cluster to. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.type The GCP machine type . The GCP machine type. platform.gcp.zones The availability zones where the installation program creates machines for the specified MachinePool. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.controlPlaneSubnet The name of the existing subnet in your VPC that you want to deploy your control plane machines to. The subnet name. platform.gcp.computeSubnet The name of the existing subnet in your VPC that you want to deploy your compute machines to. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.osDisk.diskType The type of disk. Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. The worker nodes can be either type. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name.
controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. 8.8.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.31. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. For example, a machine with two cores, two threads per core, and one socket provides (2 x 2) x 1 = 4 vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 8.8.6.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.24. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.8.6.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation".
The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 8.8.6.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 1 10 11 12 16 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. 
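Granting that service agent the disk encryption role might look like the following gcloud sketch; the key, key ring, location, and project values are placeholders:

$ gcloud kms keys add-iam-policy-binding <key_name> \
    --keyring=<key_ring> --location=<location> --project=<key_project_id> \
    --member="serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"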
For more information on granting the correct permissions for your service account, see "Machine management" "Creating machine sets" "Creating a machine set on GCP". 13 Specify the name of an existing VPC. 14 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 15 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 8.8.6.6. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: $ ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: $ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: $ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers.
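After the cluster is up, the resulting Ingress Controller can be inspected to confirm the global access setting; a minimal sketch:

$ oc -n openshift-ingress-operator get ingresscontroller default -o yaml

Look for clientAccess: Global under spec.endpointPublishingStrategy.loadBalancer.providerParameters.gcp in the output.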
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 8.8.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
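If a copy of oc is already on your PATH, you can check which version it is before deciding whether to replace it; a quick check using the standard client-only flag:

# Reports only the local client version; does not contact a cluster.
oc version --client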
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.8.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.8.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.8.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 8.9. Installing a private cluster on GCP In OpenShift Container Platform version 4.10, you can install a private cluster into an existing VPC on Google Cloud Platform (GCP). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.9.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 8.9.2.1. Private clusters in GCP To create a private cluster on Google Cloud Platform (GCP), you must provide an existing private VPC and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to the internet to access the GCP APIs.
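Because the cluster machines in this topology have no external IP addresses, that API traffic typically flows through Cloud NAT or relies on Private Google Access on the subnets. A hedged sketch of checking whether Private Google Access is enabled, with placeholder subnet and region names:

# Placeholder names; substitute your subnet and region.
gcloud compute networks subnets describe <subnet_name> \
  --region <region> \
  --format="value(privateIpGoogleAccess)"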
The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public ingress A public DNS zone that matches the baseDomain for the cluster The installation program does use the baseDomain that you specify to create a private DNS zone and the required records for the cluster. The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify. Because it is not possible to limit access to external load balancers based on source tags, the private cluster uses only internal load balancers to allow access to internal instances. The internal load balancer relies on instance groups rather than the target pools that the network load balancers use. The installation program creates instance groups for each zone, even if there is no instance in that group. The cluster IP address is internal only. One forwarding rule manages both the Kubernetes API and machine config server ports. The backend service is comprised of each zone's instance group and, while it exists, the bootstrap instance group. The firewall uses a single rule that is based on only internal source ranges. 8.9.2.1.1. Limitations No health check for the Machine config server, /healthz , runs because of a difference in load balancer functionality. Two internal load balancers cannot share a single IP address, but two network load balancers can share a single external IP address. Instead, the health of an instance is determined entirely by the /readyz check on port 6443. 8.9.3. About using a custom VPC In OpenShift Container Platform 4.10, you can deploy a cluster into an existing VPC in Google Cloud Platform (GCP). If you do, you must also use existing subnets within the VPC and routing rules. By deploying OpenShift Container Platform into an existing GCP VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself. 8.9.3.1. Requirements for using your VPC The installation program will no longer create the following components: VPC Subnets Cloud router Cloud NAT NAT IP addresses If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VPC options like DHCP, so you must do so before you install the cluster. Your VPC and subnets must meet the following characteristics: The VPC must be in the same GCP project that you deploy the OpenShift Container Platform cluster to. To allow access to the internet from the control plane and compute machines, you must configure cloud NAT on the subnets to allow egress to it. These machines do not have a public address. Even if you do not require access to the internet, you must allow egress to the VPC network to obtain the installation program and images. Because multiple cloud NATs cannot be configured on the shared subnets, the installation program cannot configure it. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist and belong to the VPC that you specified. The subnet CIDRs belong to the machine CIDR. 
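If you want to review these values yourself before you start the installation, you can list the subnets in your VPC; one possible sketch, using the VPC name from the sample install-config.yaml file and a placeholder region:

# existing_vpc matches the sample file; the region is a placeholder.
gcloud compute networks subnets list \
  --network existing_vpc \
  --regions <region> \
  --format="table(name,region,ipCidrRange)"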
You must provide a subnet to deploy the cluster control plane and compute machines to. You can use the same subnet for both machine types. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. 8.9.3.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or Ingress rules. The GCP credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage, and nodes. 8.9.3.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is preserved by firewall rules that reference the machines in your cluster by the cluster's infrastructure ID. Only traffic within the cluster is allowed. If you deploy multiple clusters to the same VPC, the following components might share access between clusters: The API, which is globally available with an external publishing strategy or available throughout the network in an internal publishing strategy Debugging tools, such as ports on VM instances that are open to the machine CIDR for SSH and ICMP access 8.9.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.9.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
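For example, once the cluster is running, a connection to a node looks like the following, where the node address is a placeholder for an IP address or host name that is reachable from your workstation:

ssh core@<node_address>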
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path to your service account private key file. USD export GOOGLE_APPLICATION_CREDENTIALS="<your_service_account_file>" Verify that the credentials were applied. USD gcloud auth list Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.9.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
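For example, on Linux you might stage the download in a dedicated directory; a minimal sketch with a hypothetical directory name:

# ~/ocp-gcp is a hypothetical directory; choose any location you prefer.
mkdir -p ~/ocp-gcp
mv openshift-install-linux.tar.gz ~/ocp-gcp/
cd ~/ocp-gcp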
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 8.9.7. Manually creating the installation configuration file For installations of a private OpenShift Container Platform cluster that are only accessible from an internal network and are not visible to the internet, you must manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 8.9.7.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 8.9.7.1.1.
Required configuration parameters Required installation configuration parameters are described in the following table: Table 8.32. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"you@example.com" }, "quay.io":{ "auth":"b3Blb=", "email":"you@example.com" } } } 8.9.7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 8.33. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 .
The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 8.9.7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 8.34. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough . Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key or keys to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: sshKey: <key1> <key2> <key3> 8.9.7.1.4. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 8.35.
Additional GCP parameters Parameter Description Values platform.gcp.network The name of the existing VPC that you want to deploy your cluster to. String. platform.gcp.region The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . platform.gcp.type The GCP machine type . The GCP machine type. platform.gcp.zones The availability zones where the installation program creates machines for the specified MachinePool. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . platform.gcp.controlPlaneSubnet The name of the existing subnet in your VPC that you want to deploy your control plane machines to. The subnet name. platform.gcp.computeSubnet The name of the existing subnet in your VPC that you want to deploy your compute machines to. The subnet name. platform.gcp.licenses A list of license URLs that must be applied to the compute images. Important The licenses parameter is a deprecated field and nested virtualization is enabled by default. It is not recommended to use this field. Any license available with the license API , such as the license to enable nested virtualization . You cannot use this parameter with a mechanism that generates pre-built images. Using a license URL forces the installer to copy the source image before use. platform.gcp.osDisk.diskSizeGB The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. platform.gcp.osDisk.diskType The type of disk. Either the default pd-ssd or the pd-standard disk type. The control plane nodes must be the pd-ssd disk type. The worker nodes can be either type. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.location For control plane machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. controlPlane.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. compute.platform.gcp.osDisk.encryptionKey.kmsKey.name The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.keyRing For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. compute.platform.gcp.osDisk.encryptionKey.kmsKey.location For compute machines, the GCP location in which the key ring exists. For more information on KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. compute.platform.gcp.osDisk.encryptionKey.kmsKey.projectID For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. 8.9.7.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.36. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. 8.9.7.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.25. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.9.7.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 8.9.7.5. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{"auths": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 publish: Internal 19 1 10 11 12 16 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 8 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 5 9 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information on granting the correct permissions for your service account, see "Machine management" "Creating machine sets" "Creating a machine set on GCP". 13 Specify the name of an existing VPC. 14 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 15 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 17 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 18 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 19 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 8.9.7.6. Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml file and completed any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 8.9.8. Additional resources Enabling customer-managed encryption keys for a machine set 8.9.8.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
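After the cluster is installed, you can review the rendered proxy configuration, including the status.noProxy values described in the following note, with a standard query:

# Shows the spec and status of the cluster-wide Proxy object.
oc get proxy/cluster -o yaml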
Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.
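While the deployment runs, you can follow its progress from a second terminal by tailing the installer log; for example, assuming the default log location noted later in this section:

tail -f <installation_directory>/.openshift_install.log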
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 8.9.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH .
To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.9.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 8.10. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates In OpenShift Container Platform version 4.10, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform.
Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 8.10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 8.10.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 8.10.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.10.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 8.10.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 8.10.4.2. 
Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You can also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 8.37. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 8.38. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 8.10.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 8.10.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 8.39. 
GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 5 0 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 8.10.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. Create the service account key in JSON format. See Creating service account keys in the GCP documentation. The service account key is required to create a cluster. 8.10.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. 
If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 8.40. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 8.10.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 8.26. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 8.27. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 8.28. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 8.29. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 8.30. 
Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 8.31. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 8.32. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 8.33. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 8.34. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 8.35. Required IAM permissions for installation iam.roles.get Example 8.36. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 8.37. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 8.38. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 8.39. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 8.40. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 8.41. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 8.42. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 8.43. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 8.44. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 8.45. 
Required Images permissions for deletion compute.images.delete compute.images.list Example 8.46. Required permissions to get Region related information compute.regions.get Example 8.47. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list 8.10.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) 8.10.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 8.10.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 8.10.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 8.41. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
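As a companion to the "Installing and configuring CLI tools for GCP" procedure above, the following is a minimal sketch of authenticating the gcloud CLI with your service account and pointing it at your project before you provision any of these machines. The <key_file> and <project_name> values are placeholders for your own service account key file and project:

$ gcloud auth activate-service-account --key-file <key_file>.json
$ gcloud config set project <project_name>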
Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can run Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.4, or RHEL 8.5. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 8.10.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.42. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use with OpenShift Container Platform. 8.10.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.48. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.10.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 8.10.6. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 8.10.6.1.
Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. 
The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 8.10.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Ensure that the service account that you use has the required permissions in your GCP project. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager .
Optional: If you do not want the cluster to provision compute machines, empty the compute pool by editing the resulting install-config.yaml file to set replicas to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.10.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.10.6.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
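As a quick alternative to opening the manifest in an editor, you can verify the setting from the command line. A minimal sketch, assuming the manifest path shown above:

$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml

The command should print mastersSchedulable: false before you continue.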
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 8.10.7. Exporting common variables 8.10.7.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 8.10.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. 
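If the jq package is not already installed, you can typically obtain it from your distribution's repositories before you begin. A minimal sketch for Red Hat Enterprise Linux 8, assuming the package is available in your enabled repositories:

$ sudo dnf install jq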
Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 8.10.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 8.10.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 8.49. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' 
+ context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 8.10.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 8.10.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 8.10.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 8.43. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.44. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 8.45. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 8.10.10. 
Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 8.10.10.1. 
Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 8.50. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 8.10.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 8.51. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 8.10.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: $ cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: '${INFRA_ID}' 1 cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone $ gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} $ gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}
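Because Deployment Manager templates are ordinary Python, you can preview the resources that any template in this topic would generate before deploying it. The following is a minimal sketch, assuming 02_lb_int.py is in the current directory; the stub property values are hypothetical placeholders.

#!/usr/bin/env python3
"""Preview the resources that 02_lb_int.py would generate."""
import importlib.util
import json

class StubContext:
    """Mimics the context object that Deployment Manager passes to templates."""
    properties = {
        'infra_id': 'example-infra-id',  # hypothetical values throughout
        'region': 'us-central1',
        'control_subnet': 'projects/p/regions/us-central1/subnetworks/s',
        'cluster_network': 'projects/p/global/networks/n',
        'zones': ['us-central1-a', 'us-central1-b', 'us-central1-c'],
    }

# Load the template file as a module and call its entry point directly.
spec = importlib.util.spec_from_file_location('lb_int', '02_lb_int.py')
lb_int = importlib.util.module_from_spec(spec)
spec.loader.exec_module(lb_int)

print(json.dumps(lb_int.GenerateConfig(StubContext()), indent=2))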
8.10.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 8.52. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 8.10.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. Create a 03_firewall.yaml resource definition file: $ cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: '${INFRA_ID}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 network_cidr: '${NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to ${NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml 8.10.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster:
Example 8.53. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}
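After the firewall deployment completes, you can audit what was actually created. The following is a minimal Python sketch, assuming the gcloud CLI is authenticated and INFRA_ID is exported; it lists each firewall rule the template created together with its allowed protocols and ports.

#!/usr/bin/env python3
"""List the firewall rules created for this cluster's INFRA_ID."""
import json
import os
import subprocess

infra_id = os.environ["INFRA_ID"]
out = subprocess.run(
    ["gcloud", "compute", "firewall-rules", "list",
     "--filter", f"name~^{infra_id}-", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for rule in json.loads(out):
    # Each "allowed" entry has an IPProtocol and, optionally, a port list.
    allowed = ", ".join(
        f"{a.get('IPProtocol')}:{','.join(a.get('ports', ['all']))}"
        for a in rule.get("allowed", [])
    )
    print(f"{rule['name']}: {allowed}")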
8.10.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: $ cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: '${INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: $ export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-m@${PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: $ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: $ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" $ gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member "serviceAccount:${WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: $ gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}
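Because the policy bindings must be reapplied whenever you recreate the service accounts, a short loop keeps the role list in one place. The following is a minimal Python sketch that wraps the same gcloud commands as the step above, assuming PROJECT_NAME, MASTER_SERVICE_ACCOUNT, and WORKER_SERVICE_ACCOUNT are exported.

#!/usr/bin/env python3
"""Apply the IAM policy bindings from the procedure above in a loop."""
import os
import subprocess

project = os.environ["PROJECT_NAME"]
bindings = [
    (os.environ["MASTER_SERVICE_ACCOUNT"], [
        "roles/compute.instanceAdmin",
        "roles/compute.networkAdmin",
        "roles/compute.securityAdmin",
        "roles/iam.serviceAccountUser",
        "roles/storage.admin",
    ]),
    (os.environ["WORKER_SERVICE_ACCOUNT"], [
        "roles/compute.viewer",
        "roles/storage.admin",
    ]),
]

for account, roles in bindings:
    for role in roles:
        # Same command as the manual step; stops on the first failure.
        subprocess.run(
            ["gcloud", "projects", "add-iam-policy-binding", project,
             "--member", f"serviceAccount:{account}", "--role", role],
            check=True,
        )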
8.10.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 8.54. 03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 8.10.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: $ gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: $ gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: $ export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: $ gcloud compute images create "${INFRA_ID}-rhcos-image" \ --source-uri="${IMAGE_SOURCE}" 8.10.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: $ export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: $ gsutil mb gs://${INFRA_ID}-bootstrap-ignition $ gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/
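The gsutil signurl command in the next step is the documented way to create the signed Ignition URL. If you prefer to do it from Python automation, the Cloud Storage client library offers an equivalent; the following is a minimal sketch, assuming the google-cloud-storage package is installed and GOOGLE_APPLICATION_CREDENTIALS points at service-account-key.json.

#!/usr/bin/env python3
"""Create a signed URL for bootstrap.ign with the Cloud Storage client."""
import datetime
import os

from google.cloud import storage  # pip install google-cloud-storage

infra_id = os.environ["INFRA_ID"]
client = storage.Client()
blob = client.bucket(f"{infra_id}-bootstrap-ignition").blob("bootstrap.ign")

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(hours=1),  # matches `gsutil signurl -d 1h`
    method="GET",
)
print(url)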
Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable: $ export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print $5}'` Create a 04_bootstrap.yaml resource definition file: $ cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: '${INFRA_ID}' 1 region: '${REGION}' 2 zone: '${ZONE_0}' 3 cluster_network: '${CLUSTER_NETWORK}' 4 control_subnet: '${CONTROL_SUBNET}' 5 image: '${CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: '${BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: $ gcloud compute instance-groups unmanaged add-instances \ ${INFRA_ID}-bootstrap-ig --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: $ gcloud compute backend-services add-backend \ ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0} 8.10.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 8.55. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': '$(ref.'
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 8.10.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: $ export MASTER_IGNITION=`cat <installation_directory>/master.ign`
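The ignition property in the next step inlines master.ign verbatim, and a malformed file surfaces only when the instances boot. The following is a quick pre-flight check as a minimal Python sketch; replace <installation_directory> with your real path.

#!/usr/bin/env python3
"""Sanity-check master.ign before it is inlined into the resource file."""
import json

with open('<installation_directory>/master.ign') as f:
    ign = json.load(f)  # raises ValueError if the file is not valid JSON

print('Ignition config version:', ign['ignition']['version'])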
Create a 05_control_plane.yaml resource definition file: $ cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: '${INFRA_ID}' 1 zones: 2 - '${ZONE_0}' - '${ZONE_1}' - '${ZONE_2}' control_subnet: '${CONTROL_SUBNET}' 3 image: '${CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: '${MASTER_SERVICE_ACCOUNT}' 6 ignition: '${MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. Run the following commands to add the control plane machines to the appropriate instance groups: $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-ig --zone=${ZONE_0} --instances=${INFRA_ID}-master-0 $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-ig --zone=${ZONE_1} --instances=${INFRA_ID}-master-1 $ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-ig --zone=${ZONE_2} --instances=${INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-master-0 $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-master-1 $ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-master-2 8.10.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 8.56. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }],
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 8.10.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: $ gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0} $ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign $ gsutil rb gs://${INFRA_ID}-bootstrap-ignition $ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
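If you automate teardown, the same four commands can run as one ordered script. The following is a minimal Python sketch, assuming INFRA_ID, REGION, and ZONE_0 are exported; the -q flag is added so that gcloud does not prompt for confirmation when deleting the deployment.

#!/usr/bin/env python3
"""Remove the bootstrap resources once bootstrap-complete succeeds."""
import os
import subprocess

env = os.environ
cmds = [
    ["gcloud", "compute", "backend-services", "remove-backend",
     f"{env['INFRA_ID']}-api-internal-backend-service",
     f"--region={env['REGION']}",
     f"--instance-group={env['INFRA_ID']}-bootstrap-ig",
     f"--instance-group-zone={env['ZONE_0']}"],
    ["gsutil", "rm", f"gs://{env['INFRA_ID']}-bootstrap-ignition/bootstrap.ign"],
    ["gsutil", "rb", f"gs://{env['INFRA_ID']}-bootstrap-ignition"],
    ["gcloud", "deployment-manager", "deployments", "delete",
     f"{env['INFRA_ID']}-bootstrap", "-q"],  # -q skips the interactive prompt
]

# Run the commands in order, stopping at the first failure so nothing
# is deleted out of sequence.
for cmd in cmds:
    subprocess.run(cmd, check=True)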
8.10.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: $ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`) Export the email address for your service account: $ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: $ export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: $ cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: '${INFRA_ID}' 2 zone: '${ZONE_0}' 3 compute_subnet: '${COMPUTE_SUBNET}' 4 image: '${CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7 ignition: '${WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: '${INFRA_ID}' 9 zone: '${ZONE_1}' 10 compute_subnet: '${COMPUTE_SUBNET}' 11 image: '${CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14 ignition: '${WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file, as shown in the sketch after this procedure. Create the deployment by using the gcloud CLI: $ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145
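If you need more than the two workers shown above, writing the resource entries by hand gets repetitive. The following is a minimal Python sketch that renders 06_worker.yaml for an arbitrary, hypothetical WORKER_COUNT, assuming the same variables are exported as in this procedure; workers are spread round-robin across the three zones.

#!/usr/bin/env python3
"""Render 06_worker.yaml for an arbitrary number of workers."""
import os

WORKER_COUNT = 3  # hypothetical knob; the procedure above defines two workers

env = os.environ
zones = [env['ZONE_0'], env['ZONE_1'], env['ZONE_2']]

blocks = []
for i in range(WORKER_COUNT):
    blocks.append(f"""\
- name: 'worker-{i}'
  type: 06_worker.py
  properties:
    infra_id: '{env['INFRA_ID']}'
    zone: '{zones[i % len(zones)]}'
    compute_subnet: '{env['COMPUTE_SUBNET']}'
    image: '{env['CLUSTER_IMAGE']}'
    machine_type: 'n1-standard-4'
    root_volume_size: '128'
    service_account_email: '{env['WORKER_SERVICE_ACCOUNT']}'
    ignition: '{env['WORKER_IGNITION']}'""")

with open('06_worker.yaml', 'w') as f:
    f.write('imports:\n- path: 06_worker.py\nresources:\n')
    f.write('\n'.join(blocks) + '\n')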
8.10.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 8.57. 06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 8.10.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH After you install the OpenShift CLI, it is available using the oc command: $ oc <command> 8.10.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 8.10.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.
To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 8.10.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Configure a GCP account. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Create the worker machines. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: $ oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: $ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'` Add the A record to the private zones: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}.
--ttl 300 --type A --zone ${INFRA_ID}-private-zone $ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: $ if [ -f transaction.yaml ]; then rm transaction.yaml; fi $ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} $ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} $ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 8.10.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: $ ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
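While the installation settles, you can summarize Operator status programmatically instead of scanning the full tables that the commands below print. The following is a minimal Python sketch, assuming oc is on your PATH and KUBECONFIG is exported.

#!/usr/bin/env python3
"""Flag cluster Operators that are not yet Available."""
import json
import subprocess

out = subprocess.run(
    ["oc", "get", "clusteroperators", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

for op in json.loads(out)["items"]:
    # Each ClusterOperator reports Available/Progressing/Degraded conditions.
    conditions = {c["type"]: c["status"] for c in op["status"]["conditions"]}
    if conditions.get("Available") != "True":
        print(f"{op['metadata']['name']}: not yet Available ({conditions})")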
Observe the running state of your cluster. Run the following command to view the current cluster version and status: $ oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): $ oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: $ oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 8.10.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.10.25. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Configure Global Access for an Ingress Controller on GCP . 8.11. Installing a cluster into a shared VPC on GCP using Deployment Manager templates In OpenShift Container Platform version 4.10, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP) that uses infrastructure that you provide. In this context, a cluster installed into a shared VPC is a cluster that is configured to use a VPC from a project different from where the cluster is being deployed. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IPs from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation. The steps for performing a user-provided infrastructure installation into a shared VPC are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 8.11.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . Note Be sure to also review this site list if you are configuring a proxy. 8.11.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them.
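One possible shape for such automation, as a minimal Python sketch, assuming oc is on your PATH and an admin kubeconfig is exported; a production approver must also verify the requested names against your node inventory rather than approving on the requestor alone.

#!/usr/bin/env python3
"""Approve pending kubelet-serving CSRs that pass a basic identity check."""
import json
import subprocess

out = subprocess.run(
    ["oc", "get", "csr", "-o", "json"],
    check=True, capture_output=True, text=True,
).stdout

for csr in json.loads(out)["items"]:
    # A CSR with no status conditions has not been approved or denied yet.
    pending = not csr.get("status", {}).get("conditions")
    requestor = csr["spec"].get("username", "")
    # Kubelet serving CSRs are requested by the node itself: system:node:<name>.
    if pending and requestor.startswith("system:node:"):
        name = csr["metadata"]["name"]
        subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)
        print("approved", name)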
8.11.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.11.4. Configuring the GCP project that hosts your cluster Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 8.11.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 8.11.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You can also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 8.46. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 8.47. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com
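If you script project setup, the required services from Table 8.46 can be enabled in one command. The following is a minimal Python sketch, assuming the gcloud CLI is authenticated against the project that hosts your cluster; `gcloud services enable` accepts multiple services in a single invocation.

#!/usr/bin/env python3
"""Enable the required GCP API services from Table 8.46 in one pass."""
import subprocess

REQUIRED_SERVICES = [
    "compute.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "dns.googleapis.com",
    "iamcredentials.googleapis.com",
    "iam.googleapis.com",
    "serviceusage.googleapis.com",
]

subprocess.run(["gcloud", "services", "enable", *REQUIRED_SERVICES], check=True)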
8.11.4.3. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 8.48. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 5 0 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 8.11.4.4. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. Create the service account key in JSON format. See Creating service account keys in the GCP documentation. The service account key is required to create a cluster. 8.11.4.4.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions.
If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 8.49. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 8.11.4.5. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) 8.11.4.6. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 8.11.5. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 8.11.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 8.50. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. 
Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.4, or RHEL 8.5. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits. 8.11.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 8.51. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.4, or RHEL 8.5 [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. 8.11.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 8.58. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 8.11.5.4. Using custom machine types Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480. 8.11.6. Configuring the GCP project that hosts your shared VPC network If you use a shared Virtual Private Cloud (VPC) to host your OpenShift Container Platform cluster in Google Cloud Platform (GCP), you must configure the project that hosts it.
Note If you already have a project that hosts the shared VPC network, review this section to ensure that the project meets all of the requirements to install an OpenShift Container Platform cluster. Procedure Create a project to host the shared VPC for your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Create a service account in the project that hosts your shared VPC. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. The service account for the project that hosts the shared VPC network requires the following roles: Compute Network User Compute Security Admin Deployment Manager Editor DNS Administrator Security Admin Network Management Admin 8.11.6.1. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the project that hosts the shared VPC that you install the cluster into. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 8.11.6.2. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. 
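If the host project and its shared VPC network already exist, it can help to confirm what the host project contains before you continue. The following read-only commands are illustrative only and are not part of the documented procedure; <host_project_id> and <vpc_network> are placeholders for your own values:

$ gcloud compute networks list --project <host_project_id>

$ gcloud compute networks subnets list --network <vpc_network> --project <host_project_id>

Because both commands only read state, they are safe to run against a production host project.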
Prerequisites Configure a GCP account. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Export the following variables required by the resource definition: Export the control plane CIDR: USD export MASTER_SUBNET_CIDR='10.0.0.0/17' Export the compute CIDR: USD export WORKER_SUBNET_CIDR='10.0.128.0/17' Export the region to deploy the VPC network and cluster to: USD export REGION='<region>' Export the variable for the ID of the project that hosts the shared VPC: USD export HOST_PROJECT=<host_project> Export the variable for the email of the service account that belongs to host project: USD export HOST_PROJECT_ACCOUNT=<host_service_account_email> Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the prefix of the network name. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 1 1 For <vpc_deployment_name> , specify the name of the VPC to deploy. Export the VPC variable that other components require: Export the name of the host project network: USD export HOST_PROJECT_NETWORK=<vpc_network> Export the name of the host project control plane subnet: USD export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet> Export the name of the host project compute subnet: USD export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet> Set up the shared VPC. See Setting up Shared VPC in the GCP documentation. 8.11.6.2.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 8.59. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' 
+ context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 8.11.7. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 8.11.7.1. Manually creating the installation configuration file For user-provisioned installations of OpenShift Container Platform, you manually generate your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory>. Note You must name this configuration file install-config.yaml. Note For some platform types, you can alternatively run ./openshift-install create install-config --dir <installation_directory> to generate an install-config.yaml file. You can provide details about your cluster configuration at the prompts. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 8.11.7.2. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: 5 - hyperthreading: Enabled 6 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 7 region: us-central1 8 pullSecret: '{"auths": ...}' fips: false 9 sshKey: ssh-ed25519 AAAA... 10 publish: Internal 11 1 Specify the public DNS on the host project. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 7 Specify the main project where the VM instances reside. 8 Specify the region that your VPC network is in. 9 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 11 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . To use a shared VPC in a cluster that uses infrastructure that you provision, you must set publish to Internal . The installation program will no longer be able to access the public DNS zone for the base domain in the host project. 8.11.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.11.7.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. 
The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Remove the privateZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {} 1 Remove this section completely. Configure the cloud provider for your VPC. Open the <installation_directory>/manifests/cloud-provider-config.yaml file. Add the network-project-id parameter and set its value to the ID of project that hosts the shared VPC network. Add the network-name parameter and set its value to the name of the shared VPC network that hosts the OpenShift Container Platform cluster. Replace the value of the subnetwork-name parameter with the value of the shared VPC subnet that hosts your compute machines. 
The contents of the <installation_directory>/manifests/cloud-provider-config.yaml resemble the following example: config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet If you deploy a cluster that is not on a private network, open the <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml file and replace the value of the scope parameter with External . The contents of the file resemble the following example: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: '' To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 8.11.8. Exporting common variables 8.11.8.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 8.11.8.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. 
Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' 1 USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 USD export NETWORK_CIDR='10.0.0.0/16' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` 1 2 Supply the values for the host project. 3 For <installation_directory> , specify the path to the directory that you stored the installation files in. 8.11.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 8.11.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 8.11.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 8.52. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.53. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 8.54. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 8.11.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
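Although it is not part of the documented steps, after the deployment in the following procedure completes you can list the addresses that it reserved to confirm that the load balancer objects exist. This is a sketch only; the filter assumes the INFRA_ID and REGION variables exported earlier in this section:

$ gcloud compute addresses list --filter="name~'${INFRA_ID}'" --regions ${REGION}

The internal cluster IP address and, for an external cluster, the public cluster IP address appear in the output.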
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{HOST_PROJECT_NETWORK} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_CONTROL_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 8.11.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 8.60. 
02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 8.11.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 8.61. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' 
+ context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 8.11.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 8.11.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 8.62. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 8.11.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR}. 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} 8.11.12.1.
Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 8.63. 03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 8.11.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use.
One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Assign the permissions that the installation program requires to the service accounts for the subnets that host the control plane and compute subnets: Grant the networkViewer role of the project that hosts your shared VPC to the master service account: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} projects add-iam-policy-binding USD{HOST_PROJECT} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkViewer" Grant the networkUser role to the master service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the control plane subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_CONTROL_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the master service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} Grant the networkUser role to the worker service account for the compute subnet: USD gcloud --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} compute networks subnets add-iam-policy-binding "USD{HOST_PROJECT_COMPUTE_SUBNET}" --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.networkUser" --region USD{REGION} The templates do not create the policy bindings due to limitations of Deployment Manager, so you must 
create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 8.11.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 8.64. 03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 8.11.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 8.11.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. 
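Before you create the bootstrap machine, you can optionally confirm that the RHCOS image you registered in the previous section is available. This check is illustrative only and is not part of the documented procedure; it assumes the INFRA_ID variable exported earlier in this section:

$ gcloud compute images describe ${INFRA_ID}-rhcos-image --format="value(status)"

The command prints READY when the image is available for use.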
Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 8.11.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 8.65. 
04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 8.11.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 
5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 8.11.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 8.66. 
05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 8.11.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. 
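While you wait for the bootstrap process in the following procedure, you can optionally follow the bootstrap node's console output to diagnose stalls. This is a sketch only, not part of the documented procedure; it assumes the INFRA_ID and ZONE_0 variables exported earlier and the optional compute.instances.getSerialPortOutput permission noted later in this document:

gcloud compute instances get-serial-port-output "${INFRA_ID}-bootstrap" --zone "${ZONE_0}"

Repeat the command to page through new console output as the node boots.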
Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 8.11.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{HOST_PROJECT_COMPUTE_SUBNET} --region=USD{REGION} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." 
--format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 8.11.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 8.67. 06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 8.11.19.
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.10. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.10 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 8.11.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 8.11.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests.
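While machines join the cluster, it can be convenient to keep a live view of the request queue rather than polling. A minimal sketch that uses the standard watch flag of the oc client:

oc get csr --watch

Leave this running in a second terminal while you work through the approval steps that follow.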
Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 8.11.22. Adding the ingress DNS records DNS zone configuration is removed when creating Kubernetes manifests and generating Ignition configs. You must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Configure a GCP account. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Create the worker machines. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} --project USD{HOST_PROJECT} --account USD{HOST_PROJECT_ACCOUNT} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 8.11.23. Adding ingress firewall rules The cluster requires several firewall rules. If you do not use a shared VPC, these rules are created by the Ingress Controller via the GCP cloud provider. When you use a shared VPC, you can either create cluster-wide firewall rules for all services now or create each rule based on events, when the cluster requests access. By creating each rule when the cluster requests access, you know exactly which firewall rules are required. By creating cluster-wide firewall rules, you can apply the same rule set across multiple clusters. If you choose to create each rule based on events, you must create firewall rules after you provision the cluster and during the life of the cluster when the console notifies you that rules are missing. Events that are similar to the following event are displayed, and you must add the firewall rules that are required: USD oc get events -n openshift-ingress --field-selector="reason=LoadBalancerManualChange" Example output Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description "{\"kubernetes.io/service-name\":\"openshift-ingress/router-default\", \"kubernetes.io/service-ip\":\"35.237.236.234\"}" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project` If you encounter issues when creating these rule-based events, you can configure the cluster-wide firewall rules while your cluster is running. 8.11.23.1. Creating cluster-wide firewall rules for a shared VPC in GCP You can create cluster-wide firewall rules to allow the access that the OpenShift Container Platform cluster requires. Warning If you do not choose to create firewall rules based on cluster events, you must create cluster-wide firewall rules. Prerequisites You exported the variables that the Deployment Manager templates require to deploy your cluster. You created the networking and load balancing components in GCP that your cluster requires. Procedure Add a single firewall rule to allow the Google Compute Engine health checks to access all of the services. This rule enables the ingress load balancers to determine the health status of their instances.
USD gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network="USD{CLUSTER_NETWORK}" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress-hc --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Add a single firewall rule to allow access to all cluster services: For an external cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges="0.0.0.0/0" --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} For a private cluster: USD gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network="USD{CLUSTER_NETWORK}" --source-ranges=USD{NETWORK_CIDR} --target-tags="USD{INFRA_ID}-master,USD{INFRA_ID}-worker" USD{INFRA_ID}-ingress --account=USD{HOST_PROJECT_ACCOUNT} --project=USD{HOST_PROJECT} Because this rule only allows traffic on TCP ports 80 and 443 , ensure that you add all the ports that your services use. 8.11.24. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. 
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 8.11.25. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 8.11.26. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . 8.12. Installing a cluster on GCP in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.10, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide and an internal mirror of the installation release content. Important While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the GCP APIs. The steps for performing a user-provisioned infrastructure installation are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 8.12.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 8.12.2. About installations in restricted networks In OpenShift Container Platform 4.10, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware or on VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 8.12.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 8.12.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.10, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.12.4. Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 8.12.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 8.12.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You can also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 8.55. 
Required API services
API service: Console service name
Compute Engine API: compute.googleapis.com
Cloud Resource Manager API: cloudresourcemanager.googleapis.com
Google DNS API: dns.googleapis.com
IAM Service Account Credentials API: iamcredentials.googleapis.com
Identity and Access Management (IAM) API: iam.googleapis.com
Service Usage API: serviceusage.googleapis.com
Table 8.56. Optional API services
API service: Console service name
Google Cloud APIs: cloudapis.googleapis.com
Service Management API: servicemanagement.googleapis.com
Google Cloud Storage JSON API: storage-api.googleapis.com
Cloud Storage: storage-component.googleapis.com
8.12.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 8.12.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys.
Table 8.57. GCP resources used in a default cluster
Service: Component; Location; Total resources required; Resources removed after bootstrap
Service account: IAM; Global; 5; 0
Firewall rules: Networking; Global; 11; 1
Forwarding rules: Compute; Global; 2; 0
Health checks: Compute; Global; 2; 0
Images: Compute; Global; 1; 0
Networks: Networking; Global; 1; 0
Routers: Networking; Global; 1; 0
Routes: Networking; Global; 2; 0
Subnetworks: Compute; Global; 2; 0
Target pools: Networking; Global; 2; 0
Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region.
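To see how close your project already is to these limits, you can list a region's quota usage before you install. The following is a sketch only; the region name is an example and the format expression is illustrative:

gcloud compute regions describe us-central1 --flatten="quotas[]" --format="table(quotas.metric,quotas.usage,quotas.limit)"

Compare the usage and limit columns for metrics such as CPUS, STATIC_ADDRESSES, and SSD_TOTAL_GB against the resource counts above.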
Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 8.12.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. Create the service account key in JSON format. See Creating service account keys in the GCP documentation. The service account key is required to create a cluster. 8.12.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The roles are applied to the service accounts that the control plane and compute machines use: Table 8.58. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 8.12.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. 
If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 8.68. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 8.69. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 8.70. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 8.71. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 8.72. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 8.73. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 8.74. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 8.75. 
Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 8.76. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 8.77. Required IAM permissions for installation iam.roles.get Example 8.78. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 8.79. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 8.80. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 8.81. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 8.82. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 8.83. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 8.84. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 8.85. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 8.86. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 8.87. Required Images permissions for deletion compute.images.delete compute.images.list Example 8.88. Required permissions to get Region related information compute.regions.get Example 8.89. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list 8.12.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. 
Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio, USA) us-south1 (Dallas, Texas, USA) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA)

8.12.4.9. Installing and configuring CLI tools for GCP

To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP.

Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions.

Procedure Install the following binaries in your PATH: gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation.

8.12.5. Requirements for a cluster with user-provisioned infrastructure

For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure.

8.12.5.1. Required machines for cluster installation

The smallest OpenShift Container Platform clusters require the following hosts:

Table 8.59. Minimum required hosts

One temporary bootstrap machine: The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster.
Three control plane machines: The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane.
At least two compute machines, which are also known as worker machines: The workloads requested by OpenShift Container Platform users run on the compute machines.

Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines.

The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, for the compute machines you can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.4, and RHEL 8.5. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits.

8.12.5.2. Minimum resource requirements for cluster installation

Each cluster machine must meet the following minimum requirements:

Table 8.60. Minimum resource requirements

Machine        Operating System                   vCPU [1]   Virtual RAM   Storage   IOPS [2]
Bootstrap      RHCOS                              4          16 GB         100 GB    300
Control plane  RHCOS                              4          16 GB         100 GB    300
Compute        RHCOS, RHEL 8.4, or RHEL 8.5 [3]   2          8 GB          100 GB    300

1. One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs.
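For example, the following short Python sketch applies this formula to a hypothetical host with SMT enabled; all of the host values shown are illustrative, not taken from any particular machine:

# vCPUs = (threads per core x cores) x sockets; all values are hypothetical
threads_per_core = 2  # SMT (hyperthreading) is enabled on this host
cores = 4
sockets = 1
vcpus = threads_per_core * cores * sockets
print(vcpus)  # prints 8, so this host presents 8 vCPUs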
2. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume size to obtain sufficient performance.

3. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines was deprecated and has been removed in OpenShift Container Platform 4.10 and later.

If an instance type for your platform meets the minimum requirements for cluster machines, you can use it in OpenShift Container Platform.

8.12.5.3. Tested instance types for GCP

The following Google Cloud Platform instance types have been tested with OpenShift Container Platform.

Example 8.90. Machine series

C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D

8.12.5.4. Using custom machine types

Using a custom machine type to install an OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type:

Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation".

The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 .

8.12.6. Creating the installation files for GCP

To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation.

8.12.6.1. Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example:

/var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system.

/var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

/var : Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 8.12.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. 
additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 8.12.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. 
For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.12.6.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. 
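The next several steps remove generated manifests and verify a scheduler setting by editing files manually. If you prefer to script those edits, the following optional Python sketch performs the same changes as the manual steps that follow; it assumes that the PyYAML package is installed and that you replace <installation_directory> with your actual directory. The manual steps remain the documented procedure.

import glob
import os

import yaml  # PyYAML, assumed to be installed

install_dir = "<installation_directory>"  # replace with your installation directory

# Remove the manifests that define the control plane machines
for path in glob.glob(os.path.join(install_dir, "openshift",
                                   "99_openshift-cluster-api_master-machines-*.yaml")):
    os.remove(path)

# Optional: remove the manifests that define the worker machines
for path in glob.glob(os.path.join(install_dir, "openshift",
                                   "99_openshift-cluster-api_worker-machineset-*.yaml")):
    os.remove(path)

# Confirm that mastersSchedulable is set to false
scheduler = os.path.join(install_dir, "manifests", "cluster-scheduler-02-config.yml")
with open(scheduler) as f:
    config = yaml.safe_load(f)
if config.get("spec", {}).get("mastersSchedulable") is not False:
    raise SystemExit("mastersSchedulable is not false; edit " + scheduler)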
Remove the Kubernetes manifest files that define the control plane machines:

USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

By removing these files, you prevent the cluster from automatically generating control plane machines.

Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines:

USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

Because you create and manage the worker machines yourself, you do not need to initialize these machines.

Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file.

Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone: 1
    id: mycluster-100419-private-zone
  publicZone: 2
    id: example.openshift.com
status: {}

1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step.

To create the Ignition configuration files, run the following command from the directory that contains the installation program:

USD ./openshift-install create ignition-configs --dir <installation_directory> 1

1 For <installation_directory> , specify the same installation directory.

Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory.

Additional resources Optional: Adding the ingress DNS records

8.12.7. Exporting common variables

8.12.7.1. Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it.

Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package.

Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

USD jq -r .infraID <installation_directory>/metadata.json 1

1 For <installation_directory> , specify the path to the directory that you stored the installation files in.

Example output

openshift-vw9j6 1

1 The output of this command is your cluster name and a random string.

8.12.7.2. Exporting common variables for Deployment Manager templates

You must export a common set of variables that are used with the provided Deployment Manager templates to assist in completing a user-provisioned infrastructure installation on Google Cloud Platform (GCP).
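The values for these variables come from the metadata.json file described in the previous section. As an optional alternative to the jq commands shown below, the following Python sketch reads the same fields; replace <installation_directory> with your actual directory:

import json

# metadata.json is written by the installation program
with open("<installation_directory>/metadata.json") as f:
    metadata = json.load(f)

cluster_name = metadata["clusterName"]
infra_id = metadata["infraID"]
project_name = metadata["gcp"]["projectID"]
region = metadata["gcp"]["region"]
print(cluster_name, infra_id, project_name, region)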
Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Generate the Ignition config files for your cluster. Install the jq package. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 8.12.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 8.12.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 8.91. 01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' 
+ context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 8.12.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 8.12.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 8.12.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Table 8.61. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 8.62. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 8.63. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 8.12.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . 
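Because the 02_infra.yaml file above references three zones ( ZONE_0 , ZONE_1 , and ZONE_2 ), you can optionally confirm that the selected region exposes at least three zones before creating the deployment. The following Python sketch shells out to the same gcloud compute regions describe command used earlier in this procedure; it assumes that the REGION environment variable is exported:

import json
import os
import subprocess

region = os.environ["REGION"]  # exported earlier in this procedure

# Describe the region and count the zones it exposes
output = subprocess.run(
    ["gcloud", "compute", "regions", "describe", region, "--format=json"],
    check=True, capture_output=True, text=True,
).stdout
zones = json.loads(output).get("zones", [])
if len(zones) < 3:
    raise SystemExit(f"{region} exposes only {len(zones)} zones; this procedure assumes three.")
print("Zones:", [z.rsplit("/", 1)[-1] for z in zones])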
Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 8.12.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 8.92. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 8.12.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 8.93. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' 
+ context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 8.12.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 8.12.11.1. 
Deployment Manager template for the private DNS

You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster:

Example 8.94. 02_dns.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-private-zone',
        'type': 'dns.v1.managedZone',
        'properties': {
            'description': '',
            'dnsName': context.properties['cluster_domain'] + '.',
            'visibility': 'private',
            'privateVisibilityConfig': {
                'networks': [{
                    'networkUrl': context.properties['cluster_network']
                }]
            }
        }
    }]

    return {'resources': resources}

8.12.12. Creating firewall rules in GCP

You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template.

Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP.

Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the firewall rules that your cluster requires.

Create a 03_firewall.yaml resource definition file:

USD cat <<EOF >03_firewall.yaml
imports:
- path: 03_firewall.py
resources:
- name: cluster-firewall
  type: 03_firewall.py
  properties:
    allowed_external_cidr: '0.0.0.0/0' 1
    infra_id: 'USD{INFRA_ID}' 2
    cluster_network: 'USD{CLUSTER_NETWORK}' 3
    network_cidr: 'USD{NETWORK_CIDR}' 4
EOF

1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} .
2 infra_id is the INFRA_ID infrastructure name from the extraction step.
3 cluster_network is the selfLink URL to the cluster network.
4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 .

Create the deployment by using the gcloud CLI:

USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml

8.12.12.1. Deployment Manager template for firewall rules

You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster:

Example 8.95.
03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 8.12.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
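Later in this procedure, you bind roles to the master service account with gcloud projects add-iam-policy-binding . After you complete those bindings, you can optionally verify them with a short Python sketch such as the following; it assumes the PROJECT_NAME and MASTER_SERVICE_ACCOUNT variables that you export in this procedure, and the role list mirrors the bindings shown in the steps below:

import json
import os
import subprocess

project = os.environ["PROJECT_NAME"]
master_sa = os.environ["MASTER_SERVICE_ACCOUNT"]  # exported later in this procedure

# The roles granted to the master service account in this procedure
required = {
    "roles/compute.instanceAdmin",
    "roles/compute.networkAdmin",
    "roles/compute.securityAdmin",
    "roles/iam.serviceAccountUser",
    "roles/storage.admin",
}
policy = json.loads(subprocess.run(
    ["gcloud", "projects", "get-iam-policy", project, "--format=json"],
    check=True, capture_output=True, text=True,
).stdout)
member = "serviceAccount:" + master_sa
granted = {b["role"] for b in policy.get("bindings", []) if member in b.get("members", [])}
missing = required - granted
if missing:
    raise SystemExit("Missing role bindings: " + ", ".join(sorted(missing)))
print("All expected master service account roles are bound.")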
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 8.12.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 8.96. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 8.12.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 8.12.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Ensure pyOpenSSL is installed. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 8.12.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 8.97. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 8.12.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure a GCP account. Generate the Ignition config files for your cluster. Create and configure a VPC and associated subnets in GCP. Create and configure networking and load balancers in GCP. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually.

Run the following commands to add the control plane machines to the appropriate instance groups:

$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-ig --zone=${ZONE_0} --instances=${INFRA_ID}-master-0
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-ig --zone=${ZONE_1} --instances=${INFRA_ID}-master-1
$ gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-ig --zone=${ZONE_2} --instances=${INFRA_ID}-master-2

For an external cluster, you must also run the following commands to add the control plane machines to the target pools:

$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_0}" --instances=${INFRA_ID}-master-0
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_1}" --instances=${INFRA_ID}-master-1
$ gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone="${ZONE_2}" --instances=${INFRA_ID}-master-2
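To double-check the membership changes, you can list one of the instance groups and describe the target pool. A sketch for the first zone; repeat the first command for ${ZONE_1} and ${ZONE_2}, and for an external cluster the target pool output should list all three control plane instances:

$ gcloud compute instance-groups unmanaged list-instances ${INFRA_ID}-master-${ZONE_0}-ig --zone=${ZONE_0}
$ gcloud compute target-pools describe ${INFRA_ID}-api-target-pool --region=${REGION}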
8.12.16.1. Deployment Manager template for control plane machines

You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster:

Example 8.98. 05_control_plane.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-master-0',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][0]
        }
    }, {
        'name': context.properties['infra_id'] + '-master-1',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][1]
        }
    }, {
        'name': context.properties['infra_id'] + '-master-2',
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd',
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['control_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-master',
                ]
            },
            'zone': context.properties['zones'][2]
        }
    }]

    return {'resources': resources}

8.12.17. Wait for bootstrap completion and remove bootstrap resources in GCP

After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program.

Prerequisites
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.

Procedure

Change to the directory that contains the installation program and run the following command:

$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1
    --log-level info 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
2 To view different installation details, specify warn, debug, or error instead of info.

If the command exits without a FATAL warning, your production control plane has initialized.

Delete the bootstrap resources:

$ gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0}
$ gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign
$ gsutil rb gs://${INFRA_ID}-bootstrap-ignition
$ gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap
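To confirm that the bootstrap resources are gone, you can query GCP for the instance and the Ignition bucket. A sketch, assuming the same shell variables; the first command should list no instances, and the second should fail because the bucket was removed:

$ gcloud compute instances list --filter="name=${INFRA_ID}-bootstrap"
$ gsutil ls gs://${INFRA_ID}-bootstrap-ignition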
8.12.18. Creating additional worker machines in GCP

You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform.

In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file.

Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
Configure a GCP account.
Generate the Ignition config files for your cluster.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.

Procedure

Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires.

Export the variables that the resource definition uses.

Export the subnet that hosts the compute machines:

$ export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)

Export the email address for your service account:

$ export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^${INFRA_ID}-w@${PROJECT_NAME}." --format json | jq -r '.[0].email'`)

Export the location of the compute machine Ignition config file:

$ export WORKER_IGNITION=`cat <installation_directory>/worker.ign`

Create a 06_worker.yaml resource definition file:

$ cat <<EOF >06_worker.yaml
imports:
- path: 06_worker.py
resources:
- name: 'worker-0' 1
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}' 2
    zone: '${ZONE_0}' 3
    compute_subnet: '${COMPUTE_SUBNET}' 4
    image: '${CLUSTER_IMAGE}' 5
    machine_type: 'n1-standard-4' 6
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7
    ignition: '${WORKER_IGNITION}' 8
- name: 'worker-1'
  type: 06_worker.py
  properties:
    infra_id: '${INFRA_ID}' 9
    zone: '${ZONE_1}' 10
    compute_subnet: '${COMPUTE_SUBNET}' 11
    image: '${CLUSTER_IMAGE}' 12
    machine_type: 'n1-standard-4' 13
    root_volume_size: '128'
    service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14
    ignition: '${WORKER_IGNITION}' 15
EOF

1 name is the name of the worker machine, for example worker-0.
2 9 infra_id is the INFRA_ID infrastructure name from the extraction step.
3 10 zone is the zone to deploy the worker machine into, for example us-central1-a.
4 11 compute_subnet is the selfLink URL to the compute subnet.
5 12 image is the selfLink URL to the RHCOS image.
6 13 machine_type is the machine type of the instance, for example n1-standard-4.
7 14 service_account_email is the email address for the worker service account that you created.
8 15 ignition is the contents of the worker.ign file.

Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file.

Create the deployment by using the gcloud CLI:

$ gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml

To use a GCP Marketplace image, specify the offer to use:

OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145
OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145
OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145
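For example, to deploy workers from the OpenShift Container Platform offer, the image property of each worker resource in 06_worker.yaml would point at the offer URL instead of ${CLUSTER_IMAGE}. A minimal sketch of the changed line, assuming the template is otherwise unmodified:

    image: 'https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145'

This works because 06_worker.py passes the image property through unchanged as the instance's sourceImage.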
8.12.18.1. Deployment Manager template for worker machines

You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster:

Example 8.99. 06_worker.py Deployment Manager template

def GenerateConfig(context):

    resources = [{
        'name': context.properties['infra_id'] + '-' + context.env['name'],
        'type': 'compute.v1.instance',
        'properties': {
            'disks': [{
                'autoDelete': True,
                'boot': True,
                'initializeParams': {
                    'diskSizeGb': context.properties['root_volume_size'],
                    'sourceImage': context.properties['image']
                }
            }],
            'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'],
            'metadata': {
                'items': [{
                    'key': 'user-data',
                    'value': context.properties['ignition']
                }]
            },
            'networkInterfaces': [{
                'subnetwork': context.properties['compute_subnet']
            }],
            'serviceAccounts': [{
                'email': context.properties['service_account_email'],
                'scopes': ['https://www.googleapis.com/auth/cloud-platform']
            }],
            'tags': {
                'items': [
                    context.properties['infra_id'] + '-worker',
                ]
            },
            'zone': context.properties['zone']
        }
    }]

    return {'resources': resources}

8.12.19. Logging in to the cluster by using the CLI

You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
You deployed an OpenShift Container Platform cluster.
You installed the oc CLI.

Procedure

Export the kubeadmin credentials:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Verify you can run oc commands successfully using the exported configuration:

$ oc whoami

Example output

system:admin

8.12.20. Disabling the default OperatorHub sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.

Procedure

Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.
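To confirm the change, you can check which catalog sources remain in the openshift-marketplace namespace. A sketch; after the patch, default sources such as redhat-operators and community-operators should no longer appear in the output:

$ oc get catalogsources -n openshift-marketplace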
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE   REQUESTOR                                                                    CONDITION
csr-8b2br   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper    Pending
csr-8vnps   15m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper    Pending
...

In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines:

Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node.

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Note Some Operators might not become available until some CSRs are approved.

Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

$ oc get csr

Example output

NAME        AGE     REQUESTOR                                                   CONDITION
csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal       Pending
csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal       Pending
...

If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

To approve them individually, run the following command for each valid CSR:

$ oc adm certificate approve <csr_name> 1

1 <csr_name> is the name of a CSR from the list of current CSRs.

To approve all pending CSRs, run the following command:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve

After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

$ oc get nodes

Example output

NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   73m   v1.23.0
master-1   Ready    master   73m   v1.23.0
master-2   Ready    master   74m   v1.23.0
worker-0   Ready    worker   11m   v1.23.0
worker-1   Ready    worker   11m   v1.23.0

Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
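While waiting for the second round of CSRs, it can help to watch the CSR queue and node readiness together. A convenience sketch, not part of the documented procedure; press Ctrl+C to exit once all nodes report Ready:

$ watch -n 10 'oc get csr; echo; oc get nodes'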
Additional information

For more information on CSRs, see Certificate Signing Requests.

8.12.22. Optional: Adding the ingress DNS records

If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements.

Prerequisites
Configure a GCP account.
Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs.
Create and configure a VPC and associated subnets in GCP.
Create and configure networking and load balancers in GCP.
Create control plane and compute roles.
Create the bootstrap machine.
Create the control plane machines.
Create the worker machines.

Procedure

Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field:

$ oc -n openshift-ingress get service router-default

Example output

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
router-default   LoadBalancer   172.30.18.154   35.233.157.184   80:32288/TCP,443:31215/TCP   98

Add the A record to your zones:

To use A records:

Export the variable for the router IP address:

$ export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`

Add the A record to the private zones:

$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone
$ gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone

For an external cluster, also add the A record to the public zones:

$ if [ -f transaction.yaml ]; then rm transaction.yaml; fi
$ gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction add ${ROUTER_IP} --name \*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME}
$ gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}

To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes:

$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes

Example output

oauth-openshift.apps.your.cluster.domain.example.com
console-openshift-console.apps.your.cluster.domain.example.com
downloads-openshift-console.apps.your.cluster.domain.example.com
alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com
grafana-openshift-monitoring.apps.your.cluster.domain.example.com
prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com
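To confirm that the wildcard record exists, you can list the record sets in the zone. A sketch for the private zone; the same check works against ${BASE_DOMAIN_ZONE_NAME} for an external cluster:

$ gcloud dns record-sets list --zone ${INFRA_ID}-private-zone --name "*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}."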
8.12.23. Completing a GCP installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready.

Prerequisites
Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned GCP infrastructure.
Install the oc CLI and log in.

Procedure

Complete the cluster installation:

$ ./openshift-install --dir <installation_directory> wait-for install-complete 1

Example output

INFO Waiting up to 30m0s for the cluster to initialize...

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.

Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Observe the running state of your cluster.

Run the following command to view the current cluster version and status:

$ oc get clusterversion

Example output

NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          24m     Working towards 4.5.4: 99% complete

Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO):

$ oc get clusteroperators

Example output

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.4     True        False         False      7m56s
cloud-credential                           4.5.4     True        False         False      31m
cluster-autoscaler                         4.5.4     True        False         False      16m
console                                    4.5.4     True        False         False      10m
csi-snapshot-controller                    4.5.4     True        False         False      16m
dns                                        4.5.4     True        False         False      22m
etcd                                       4.5.4     False       False         False      25s
image-registry                             4.5.4     True        False         False      16m
ingress                                    4.5.4     True        False         False      16m
insights                                   4.5.4     True        False         False      17m
kube-apiserver                             4.5.4     True        False         False      19m
kube-controller-manager                    4.5.4     True        False         False      20m
kube-scheduler                             4.5.4     True        False         False      20m
kube-storage-version-migrator              4.5.4     True        False         False      16m
machine-api                                4.5.4     True        False         False      22m
machine-config                             4.5.4     True        False         False      22m
marketplace                                4.5.4     True        False         False      16m
monitoring                                 4.5.4     True        False         False      10m
network                                    4.5.4     True        False         False      23m
node-tuning                                4.5.4     True        False         False      23m
openshift-apiserver                        4.5.4     True        False         False      17m
openshift-controller-manager               4.5.4     True        False         False      15m
openshift-samples                          4.5.4     True        False         False      16m
operator-lifecycle-manager                 4.5.4     True        False         False      22m
operator-lifecycle-manager-catalog         4.5.4     True        False         False      22m
operator-lifecycle-manager-packageserver   4.5.4     True        False         False      18m
service-ca                                 4.5.4     True        False         False      23m
service-catalog-apiserver                  4.5.4     True        False         False      23m
service-catalog-controller-manager         4.5.4     True        False         False      23m
storage                                    4.5.4     True        False         False      17m

Run the following command to view your cluster pods:

$ oc get pods --all-namespaces

Example output

NAMESPACE                                               NAME                                                              READY   STATUS    RESTARTS   AGE
kube-system                                             etcd-member-ip-10-0-3-111.us-east-2.compute.internal              1/1     Running   0          35m
kube-system                                             etcd-member-ip-10-0-3-239.us-east-2.compute.internal              1/1     Running   0          37m
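After the installation completes, you can retrieve the web console URL from the cluster itself. A sketch, assuming KUBECONFIG is still exported:

$ oc -n openshift-console get route console -o jsonpath='{.spec.host}{"\n"}'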
kube-system                                             etcd-member-ip-10-0-3-24.us-east-2.compute.internal               1/1     Running   0          35m
openshift-apiserver-operator                            openshift-apiserver-operator-6d6674f4f4-h7t2t                     1/1     Running   1          37m
openshift-apiserver                                     apiserver-fm48r                                                   1/1     Running   0          30m
openshift-apiserver                                     apiserver-fxkvv                                                   1/1     Running   0          29m
openshift-apiserver                                     apiserver-q85nm                                                   1/1     Running   0          29m
...
openshift-service-ca-operator                           openshift-service-ca-operator-66ff6dc6cd-9r257                    1/1     Running   0          37m
openshift-service-ca                                    apiservice-cabundle-injector-695b6bcbc-cl5hm                      1/1     Running   0          35m
openshift-service-ca                                    configmap-cabundle-injector-8498544d7-25qn6                       1/1     Running   0          35m
openshift-service-ca                                    service-serving-cert-signer-6445fc9c6-wqdqn                       1/1     Running   0          35m
openshift-service-catalog-apiserver-operator            openshift-service-catalog-apiserver-operator-549f44668b-b5q2w    1/1     Running   0          32m
openshift-service-catalog-controller-manager-operator   openshift-service-catalog-controller-manager-operator-b78cr2lnm  1/1     Running   0          31m

When the current cluster version is AVAILABLE, the installation is complete.

8.12.24. Telemetry access for OpenShift Container Platform

In OpenShift Container Platform 4.10, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager.

After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level.

Additional resources
See About remote health monitoring for more information about the Telemetry service.

8.12.25. Next steps

Customize your cluster.
Configure image streams for the Cluster Samples Operator and the must-gather tool.
Learn how to use Operator Lifecycle Manager (OLM) on restricted networks.
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
If necessary, you can opt out of remote health reporting.
If necessary, see Registering your disconnected cluster.

8.13. Uninstalling a cluster on GCP

You can remove a cluster that you deployed to Google Cloud Platform (GCP).

8.13.1. Removing a cluster that uses installer-provisioned infrastructure

You can remove a cluster that uses installer-provisioned infrastructure from your cloud.

Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted.

Prerequisites
Have a copy of the installation program that you used to deploy the cluster.
Have the files that the installation program generated when you created your cluster.

Procedure

From the directory that contains the installation program on the computer that you used to install the cluster, run the following command:

$ ./openshift-install destroy cluster \
    --dir <installation_directory> --log-level info 1 2

1 For <installation_directory>, specify the path to the directory that you stored the installation files in.
2 To view different details, specify warn, debug, or error instead of info.

Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster.
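As a starting point for the leftover-resource check that the note above recommends, you can search GCP for anything that still carries the cluster's infrastructure name. A sketch, assuming the INFRA_ID variable from earlier in this chapter is still set; each command should return no results:

$ gcloud compute instances list --filter="name~^${INFRA_ID}"
$ gcloud compute health-checks list --filter="name~^${INFRA_ID}"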
Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.

8.13.2. Deleting GCP resources with the Cloud Credential Operator utility

To clean up resources after uninstalling an OpenShift Container Platform cluster with the Cloud Credential Operator (CCO) in manual mode with GCP Workload Identity, you can use the CCO utility (ccoctl) to remove the GCP resources that ccoctl created during installation.

Prerequisites
Extract and prepare the ccoctl binary.
Install an OpenShift Container Platform cluster with the CCO in manual mode with GCP Workload Identity.

Procedure

Obtain the OpenShift Container Platform release image by running the following command:

$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:

$ oc adm release extract --credentials-requests \
    --cloud=gcp \
    --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ 1
    $RELEASE_IMAGE

1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist.

Delete the GCP resources that ccoctl created:

$ ccoctl gcp delete \
    --name=<name> \ 1
    --project=<gcp_project_id> \ 2
    --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests

1 <name> matches the name that was originally used to create and tag the cloud resources.
2 <gcp_project_id> is the GCP project ID in which to delete cloud resources.

Verification

To verify that the resources are deleted, query GCP. For more information, refer to GCP documentation.
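For example, because the IAM service accounts that ccoctl creates are named with the <name> value, a query like the following (a sketch, with <name> and <gcp_project_id> substituted) should return no results after a successful deletion:

$ gcloud iam service-accounts list --project=<gcp_project_id> --filter="email~^<name>"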
[ "openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "openshift-install create manifests --dir <installation_directory>", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=gcp", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component-secret> namespace: <component-namespace>", "apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "grep \"release.openshift.io/feature-gate\" *", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade", "openshift-install create cluster --dir <installation_directory>", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 pullSecret: '{\"auths\": ...}' 13 fips: false 14 sshKey: ssh-ed25519 AAAA... 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "openshift-install create manifests --dir <installation_dir>", "deletionProtection: false disks: - autoDelete: true boot: true image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 labels: null sizeGb: 128 type: pd-ssd kind: GCPMachineProviderSpec machineType: n2-standard-4", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: 11 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 12 region: us-central1 13 pullSecret: '{\"auths\": ...}' 14 fips: false 15 sshKey: ssh-ed25519 AAAA... 
16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: 
custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 additionalTrustBundle: | 19 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 20 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 
18", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export GOOGLE_APPLICATION_CREDENTIALS=\"<your_service_account_file>\"", "gcloud auth list", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "sshKey: <key1> <key2> <key3>", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 5 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 compute: 6 7 - hyperthreading: Enabled 8 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 9 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id replicas: 3 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 11 region: us-central1 12 
network: existing_vpc 13 controlPlaneSubnet: control_plane_subnet 14 computeSubnet: compute_subnet 15 pullSecret: '{\"auths\": ...}' 16 fips: false 17 sshKey: ssh-ed25519 AAAA... 18 publish: Internal 19", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe ${INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-master-subnet --region=${REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: '${INFRA_ID}' 3 region: '${REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: '${CLUSTER_NETWORK}' control_subnet: '${CONTROL_SUBNET}' 5 infra_id: '${INFRA_ID}' region: '${REGION}' zones: 6 - '${ZONE_0}' - '${ZONE_1}' - '${ZONE_2}' EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`)",
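"# Optional verification, not part of the original procedure: list the cluster API addresses reserved by the 02_infra deployment. gcloud compute addresses list --filter=\"name~^${INFRA_ID}\" --regions=${REGION}",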
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: '${INFRA_ID}' 1 cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone",
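"# Optional verification, not part of the original procedure: confirm the api and api-int A records exist in the private zone. gcloud dns record-sets list --zone ${INFRA_ID}-private-zone",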
"if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: '${INFRA_ID}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 network_cidr: '${NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml",
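"# Optional verification, not part of the original procedure: review the firewall rules the 03_firewall deployment created. gcloud compute firewall-rules list --filter=\"name~^${INFRA_ID}\"",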
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: '${INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-m@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"${INFRA_ID}-rhcos-image\" --source-uri=\"${IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://${INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/",
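"# Optional verification, not part of the original procedure: confirm the imported RHCOS image is READY before launching instances. gcloud compute images describe ${INFRA_ID}-rhcos-image --format=\"value(status)\"",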
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", 
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud 
"export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${INFRA_ID}-worker-subnet --region=${REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: '${INFRA_ID}' 2 zone: '${ZONE_0}' 3 compute_subnet: '${COMPUTE_SUBNET}' 4 image: '${CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7 ignition: '${WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: '${INFRA_ID}' 9 zone: '${ZONE_1}' 10 compute_subnet: '${COMPUTE_SUBNET}' 11 image: '${CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14 ignition: '${WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo $PATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo $PATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0",
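"# Optional helper, not part of the original procedure: keep approving pending CSRs (using the same go-template as above) until the workers finish registering, then re-run oc get nodes. while oc get csr | grep -q Pending; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 10; done",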
"oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone gcloud dns record-sets transaction add ${ROUTER_IP} --name \\*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add ${ROUTER_IP} --name \\*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "export MASTER_SUBNET_CIDR='10.0.0.0/17'", "export WORKER_SUBNET_CIDR='10.0.128.0/17'", "export REGION='<region>'", "export HOST_PROJECT=<host_project>", "export HOST_PROJECT_ACCOUNT=<host_service_account_email>", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: '<prefix>' 1 region: '${REGION}' 2 master_subnet_cidr: '${MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: '${WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create <vpc_deployment_name> --config 01_vpc.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} 1", "export HOST_PROJECT_NETWORK=<vpc_network>", "export HOST_PROJECT_CONTROL_SUBNET=<control_plane_subnet>", "export HOST_PROJECT_COMPUTE_SUBNET=<compute_subnet>", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': '$(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': '$(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: 5 - hyperthreading: Enabled 6 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 0 metadata: name: test-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 7 region: us-central1 8 pullSecret: '{\"auths\": ...}' fips: false 9 sshKey: ssh-ed25519 AAAA... 10 publish: Internal 11", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone status: {}", "config: |+ [global] project-id = example-project regional = true multizone = true node-tags = opensh-ptzzx-master node-tags = opensh-ptzzx-worker node-instance-prefix = opensh-ptzzx external-instance-groups-prefix = opensh-ptzzx network-project-id = example-shared-vpc network-name = example-network subnetwork-name = example-worker-subnet", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External type: LoadBalancerService status: availableReplicas: 0 domain: '' selector: ''", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' 1 export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' 2 export NETWORK_CIDR='10.0.0.0/16' export KUBECONFIG=<installation_directory>/auth/kubeconfig 3 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json`", "export CLUSTER_NETWORK=(`gcloud compute networks describe ${HOST_PROJECT_NETWORK} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe ${HOST_PROJECT_CONTROL_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe ${REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: '${INFRA_ID}' 3 region: '${REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: '${CLUSTER_NETWORK}' control_subnet: '${CONTROL_SUBNET}' 5 infra_id: '${INFRA_ID}' region: '${REGION}' zones: 6 - '${ZONE_0}' - '${ZONE_1}' - '${ZONE_2}' EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-ip --region=${REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe ${INFRA_ID}-cluster-public-ip --region=${REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': '$(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': '$(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['$(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': '$(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': '$(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: '${INFRA_ID}' 1 cluster_domain: '${CLUSTER_NAME}.${BASE_DOMAIN}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-dns --config 02_dns.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add ${CLUSTER_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add ${CLUSTER_IP} --name api-int.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction add ${CLUSTER_PUBLIC_IP} --name api.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 60 --type A --zone ${BASE_DOMAIN_ZONE_NAME} gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: '${INFRA_ID}' 2 cluster_network: '${CLUSTER_NETWORK}' 3 network_cidr: '${NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-firewall --config 03_firewall.yaml --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: '${INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-m@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} projects add-iam-policy-binding ${HOST_PROJECT} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkViewer\"", "gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding \"${HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region ${REGION}", "gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding \"${HOST_PROJECT_CONTROL_SUBNET}\" --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region ${REGION}", "gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding \"${HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region ${REGION}", "gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets add-iam-policy-binding \"${HOST_PROJECT_COMPUTE_SUBNET}\" --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkUser\" --region ${REGION}", "gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding ${PROJECT_NAME} --member \"serviceAccount:${WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"",
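"# Optional verification, not part of the original procedure: confirm the roles/compute.networkUser bindings landed on the shared VPC subnets. gcloud --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT} compute networks subnets get-iam-policy \"${HOST_PROJECT_CONTROL_SUBNET}\" --region ${REGION}",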
"gcloud iam service-accounts keys create service-account-key.json --iam-account=${MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"${INFRA_ID}-rhcos-image\" --source-uri=\"${IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe ${INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://${INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://${INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep \"^gs:\" | awk '{print $5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: '${INFRA_ID}' 1 region: '${REGION}' 2 zone: '${ZONE_0}' 3 cluster_network: '${CLUSTER_NETWORK}' 4 control_subnet: '${CONTROL_SUBNET}' 5 image: '${CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: '${BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-bootstrap-ig --zone=${ZONE_0} --instances=${INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': '$(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: '${INFRA_ID}' 1 zones: 2 - '${ZONE_0}' - '${ZONE_1}' - '${ZONE_2}' control_subnet: '${CONTROL_SUBNET}' 3 image: '${CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: '${MASTER_SERVICE_ACCOUNT}' 6 ignition: '${MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_0}-ig --zone=${ZONE_0} --instances=${INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_1}-ig --zone=${ZONE_1} --instances=${INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances ${INFRA_ID}-master-${ZONE_2}-ig --zone=${ZONE_2} --instances=${INFRA_ID}-master-2", "gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_0}\" --instances=${INFRA_ID}-master-0", "gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_1}\" --instances=${INFRA_ID}-master-1", "gcloud compute target-pools add-instances ${INFRA_ID}-api-target-pool --instances-zone=\"${ZONE_2}\" --instances=${INFRA_ID}-master-2",
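"# Optional verification, not part of the original procedure: confirm all three control plane instances were added to the API target pool. gcloud compute target-pools describe ${INFRA_ID}-api-target-pool --region=${REGION} --format=\"value(instances)\"",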
"def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend ${INFRA_ID}-api-internal-backend-service --region=${REGION} --instance-group=${INFRA_ID}-bootstrap-ig --instance-group-zone=${ZONE_0}", "gsutil rm gs://${INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://${INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete ${INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe ${HOST_PROJECT_COMPUTE_SUBNET} --region=${REGION} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^${INFRA_ID}-w@${PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: '${INFRA_ID}' 2 zone: '${ZONE_0}' 3 compute_subnet: '${COMPUTE_SUBNET}' 4 image: '${CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 7 ignition: '${WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: '${INFRA_ID}' 9 zone: '${ZONE_1}' 10 compute_subnet: '${COMPUTE_SUBNET}' 11 image: '${CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: '${WORKER_SERVICE_ACCOUNT}' 14 ignition: '${WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create ${INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo $PATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo $PATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add ${ROUTER_IP} --name \\*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone ${INFRA_ID}-private-zone --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction add ${ROUTER_IP} --name \\*.apps.${CLUSTER_NAME}.${BASE_DOMAIN}. --ttl 300 --type A --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT} gcloud dns record-sets transaction execute --zone ${BASE_DOMAIN_ZONE_NAME} --project ${HOST_PROJECT} --account ${HOST_PROJECT_ACCOUNT}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "oc get events -n openshift-ingress --field-selector=\"reason=LoadBalancerManualChange\"", "Firewall change required by security admin: `gcloud compute firewall-rules create k8s-fw-a26e631036a3f46cba28f8df67266d55 --network example-network --description \"{\\\"kubernetes.io/service-name\\\":\\\"openshift-ingress/router-default\\\", \\\"kubernetes.io/service-ip\\\":\\\"35.237.236.234\\\"}\\\" --allow tcp:443,tcp:80 --source-ranges 0.0.0.0/0 --target-tags exampl-fqzq7-master,exampl-fqzq7-worker --project example-project`", "gcloud compute firewall-rules create --allow='tcp:30000-32767,udp:30000-32767' --network=\"${CLUSTER_NETWORK}\" --source-ranges='130.211.0.0/22,35.191.0.0/16,209.85.152.0/22,209.85.204.0/22' --target-tags=\"${INFRA_ID}-master,${INFRA_ID}-worker\" ${INFRA_ID}-ingress-hc --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"${CLUSTER_NETWORK}\" --source-ranges=\"0.0.0.0/0\" --target-tags=\"${INFRA_ID}-master,${INFRA_ID}-worker\" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}", "gcloud compute firewall-rules create --allow='tcp:80,tcp:443' --network=\"${CLUSTER_NETWORK}\" --source-ranges=${NETWORK_CIDR} --target-tags=\"${INFRA_ID}-master,${INFRA_ID}-worker\" ${INFRA_ID}-ingress --account=${HOST_PROJECT_ACCOUNT} --project=${HOST_PROJECT}", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager
4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.10.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 
httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", 
"gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal-backend-service --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud 
compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.23.0 master-1 Ready master 63m v1.23.0 master-2 Ready master 64m v1.23.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.23.0 master-1 Ready master 73m v1.23.0 master-2 Ready master 74m v1.23.0 worker-0 Ready worker 11m v1.23.0 worker-1 Ready worker 11m v1.23.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP 
EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com grafana-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator 
openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --credentials-requests --cloud=gcp --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 USDRELEASE_IMAGE", "ccoctl gcp delete --name=<name> \\ 1 --project=<gcp_project_id> \\ 2 --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/installing/installing-on-gcp
Chapter 3. Using the Block Storage backup service
Chapter 3. Using the Block Storage backup service

You can use the Block Storage backup service to perform full or incremental backups, and to restore a backup to a volume.

3.1. Full backups

3.1.1. Creating a full volume backup

To back up a volume, use the cinder backup-create command. By default, this command creates a full backup of the volume. If the volume has existing backups, you can choose to create an incremental backup instead. For more information, see Section 3.2.2, "Performing incremental backups".

Note: Prior to Red Hat OpenStack Platform version 16, the cinder backup-create command created incremental backups after the first full Ceph volume backup to a Ceph Storage back end. In RHOSP version 16 and later, you must use the --incremental option to create incremental volume backups. If you do not use the --incremental option with the cinder backup-create command, the default setting creates full backups. For more information, see Section 3.2.2, "Performing incremental backups".

You can create backups of volumes that you have access to. This means that users with administrative privileges can back up any volume, regardless of owner. For more information, see Section 3.1.2, "Creating a volume backup as an admin".

Procedure

View the ID or Display Name of the volume you want to back up:

Back up the volume. Replace VOLUME with the ID or Display Name of the volume you want to back up:

The volume_id of the resulting backup is identical to the ID of the source volume.

Verify that the volume backup creation is complete:

The volume backup creation is complete when the Status of the backup entry is available.

3.1.2. Creating a volume backup as an admin

Users with administrative privileges can back up any volume managed by Red Hat OpenStack Platform. When an admin user backs up a volume that is owned by a non-admin user, the backup is hidden from the volume owner by default.

Procedure

As an admin user, you can use the following command to back up a volume and make the backup available to a specific project:

Replace the following variables according to your environment requirements:

<PROJECTNAME> is the name of the project (tenant) where you want to make the backup available.
<USERNAME> and <PASSWD> are the username and password credentials of a user within <PROJECTNAME>.
<VOLUME> is the name or ID of the volume that you want to back up.
<KEYSTONEURL> is the URL endpoint of the Identity service, which is typically http://<IP>:5000/v2, where <IP> is the IP address of the Identity service host.

When you perform this operation, the size of the resulting backup counts against the quota of <PROJECTNAME> rather than the quota of the admin's project.

3.1.3. Exporting the metadata of a volume backup

You can export and store the metadata of a volume backup so that you can restore the volume backup even if the Block Storage database suffers a catastrophic loss.

Procedure

Run the following command, replacing <BACKUPID> with the ID or name of the volume backup:

The volume backup metadata consists of the backup_service and backup_url values.
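For example, a minimal full-backup and metadata-export workflow might look like the following sketch. The volume name vol1 is a placeholder, and the IDs shown are illustrative sample values that also appear in the command listing at the end of this chapter:

cinder list
cinder backup-create vol1
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | e9d15fc7-eeae-4ca4-aa72-d52536dc551d |
| name      | None                                 |
| volume_id | 5f75430a-abff-4cc7-b74e-f808234fa6c5 |
+-----------+--------------------------------------+
cinder backup-list
cinder backup-export e9d15fc7-eeae-4ca4-aa72-d52536dc551d

Store the backup_service and backup_url values from the export output in a safe location, because you need both values to import the backup metadata after a database loss.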
3.1.4. Backing up an in-use volume

You can create a backup of an in-use volume with the --force option when the Block Storage back end supports snapshots.

Note: To use the --force option, the Block Storage back end must support snapshots. You can verify snapshot support by checking the documentation for the back end that you are using. By using the --force option, you acknowledge that you are not quiescing the drive before performing the backup.

Using this method creates a crash-consistent, but not application-consistent, backup. This means that the backup does not have an awareness of which applications were running when the backup was performed. However, the data is intact.

Procedure

To create a backup of an in-use volume, run:

3.1.5. Backing up a snapshot

You can create a full backup from a snapshot by using the volume ID that is associated with the snapshot.

Procedure

Locate the snapshot ID of the snapshot that you want to back up by using the cinder snapshot-list command:

If the snapshot is named, you can use the following example to locate the ID:

Create the backup of the snapshot:

3.2. Incremental backups

Using the Block Storage backup service, you can perform incremental backups.

3.2.1. Performance considerations

Some backup features, such as incremental backups and data compression, can impact performance. Incremental backups have a performance impact because all of the data in a volume must be read and checksummed for both the full and each incremental backup.

You can use data compression with non-Ceph back ends. Enabling data compression requires additional CPU power but uses less network bandwidth and storage space overall.

The multipathing configuration also impacts performance. If you attach multiple volumes without enabling multipathing, you might not be able to connect or have full network capabilities, which impacts performance.

You can use the advanced configuration options to enable or disable compression, define the number of processes, and add additional CPU resources. For more information, see Appendix B, Advanced Block Storage backup configuration options.

3.2.1.1. Impact of backing up from a snapshot

Some back ends support creating a backup from a snapshot. A driver that supports this feature can directly attach a snapshot, which is faster than cloning the snapshot into a volume to attach to it. For back ends that do not support directly attaching a snapshot, the extra step of creating the volume from the snapshot can affect performance.

3.2.2. Performing incremental backups

By default, the cinder backup-create command creates a full backup of a volume. However, if the volume has existing backups, you can create an incremental backup. Incremental backups are fully supported on NFS, Object Storage (swift), and Red Hat Ceph Storage backup repositories.

An incremental backup captures any changes to the volume since the last full or incremental backup. Performing numerous, regular, full backups of a volume can become resource intensive because the size of the volume increases over time. With incremental backups, you can capture periodic changes to volumes and minimize resource usage.

Procedure

To create an incremental volume backup, use the --incremental option with the following command. Replace VOLUME with the ID or Display Name of the volume you want to back up:

Note: You cannot delete a full backup if it already has an incremental backup. If a full backup has multiple incremental backups, you can only delete the latest one.
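For example, assuming an existing full backup of a volume named vol1 (a placeholder name), a periodic incremental backup could be created as follows:

cinder backup-create vol1 --incremental

Each run captures only the changes made since the most recent full or incremental backup of vol1, which keeps the storage consumed by the backup chain low.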
3.3. Canceling a backup

To cancel a backup, an administrator must request a force delete on the backup.

Important: This operation is not supported if you use the Ceph or RBD back ends.

Procedure

Run the following command:

Even after the canceled backup no longer appears in the backup listings, there can be a slight delay before the backup is fully canceled. The cancellation is complete when the backing-up status clears from the source resource.

Note: Before Red Hat OpenStack Platform version 12, the backing-up status was stored in the volume, even when backing up a snapshot. Therefore, when backing up a snapshot, any delete operation on the snapshot that followed a cancellation could result in an error if the snapshot was still mapped. In Red Hat OpenStack Platform version 13 and later, ongoing restoration operations can be canceled on any of the supported backup drivers.

3.4. Viewing and modifying project backup quota

Normally, you can use the dashboard to modify project storage quotas, for example, the number of volumes, volume storage, snapshots, or other operational limits that a project can have. However, the functionality to modify backup quotas with the dashboard is not yet available. You must use the command-line interface to modify backup quotas.

Procedure

To view the storage quotas of a specific project (tenant), enter the following command:

Update the maximum number of backups, <MAXNUM>, that can be created in a specific project:

Update the maximum total size of all backups, <MAXGB>, within a specific project:

View the storage quota usage of a specific project:

3.5. Restoring from backups

3.5.1. Restoring a volume from a backup

To create a new volume from a backup, complete the following steps.

Procedure

Find the ID of the volume backup you want to use:

Ensure that the Volume ID matches the ID of the volume that you want to restore.

Restore the volume backup. Replace BACKUP_ID with the ID of the volume backup you want to use:

If you no longer need the backup, delete it:

If you need to restore a backed-up volume to a volume of a particular type, use the --volume option to restore the backup to a specific volume:

Note: If you restore a volume from an encrypted backup, the destination volume type must also be encrypted.

3.5.2. Restoring a volume after a Block Storage database loss

When a Block Storage database loss occurs, you cannot restore a volume backup because the database contains metadata that the volume backup service requires. However, after you create the volume backup, you can export and store the metadata, which consists of the backup_service and backup_url values, so that when a database loss occurs, you can still restore the volume backup. For more information, see Section 3.1.1, "Creating a full volume backup".

If you exported and stored this metadata, you can import it to a new Block Storage database, which allows you to restore the volume backup.

Note: For incremental backups, you must import all exported data before you can restore one of the incremental backups.

Procedure

As a user with administrative privileges, run the following command. Replace backup_service and backup_url with the metadata that you exported, for example, the metadata exported in Section 3.1.1, "Creating a full volume backup":

After you import the metadata into the Block Storage service database, you can restore the volume as normal. See Section 3.5.1, "Restoring a volume from a backup".
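For example, importing previously exported metadata into a rebuilt Block Storage database might look like the following; the truncated backup_url string and the resulting ID are illustrative sample values from the command listing at the end of this chapter:

cinder backup-import cinder.backup.drivers.swift eyJzdGF0dXMi...c2l6ZSI6IDF9
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| id       | 77951e2f-4aff-4365-8c64-f833802eaa43 |
| name     | None                                 |
+----------+--------------------------------------+

After the import succeeds, the backup appears in the cinder backup-list output and can be restored with cinder backup-restore.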
3.5.3. Canceling a backup restore

To cancel a backup restore operation, alter the status of the backup to anything other than restoring. You can use the error state to minimize confusion regarding whether the restore was successful or not. Alternatively, you can change the value to available.

Note: Backup cancellation is an asynchronous action, because the backup driver must detect the status change before it cancels the backup. When the status changes to available in the destination volume, the cancellation is complete.

Note: This feature is not currently available on RBD backups.

Warning: If a restore operation is canceled after it starts, the destination volume is useless, because there is no way of knowing how much data, if any, was actually restored.

3.6. Troubleshooting

There are two common scenarios that cause many of the issues that occur with the backup service:

When the cinder-backup service starts, it connects to its configured back end and uses it as a target for backups. Problems with this connection can cause services to fail.

When backups are requested, the backup service connects to the volume service and attaches the requested volume. Problems with this connection are evident only at backup time.

In either case, the logs contain messages that describe the error. For more information about log files and services, see Log Files for OpenStack Services in the Logging, Monitoring and Troubleshooting Guide. For more general information about log locations and troubleshooting suggestions, see Block Storage (cinder) Log Files in the Logging, Monitoring and Troubleshooting Guide.

3.6.1. Verifying services

You can diagnose many issues by verifying that services are available and by checking log files for error messages. For more information about the key services and their interactions, see Section 1.2, "How backups and restores work".

After you verify the status of the services, check the cinder-backup.log file. The Block Storage backup service log is located in /var/log/containers/cinder/cinder-backup.log.

Procedure

Run the cinder show command on the volume to see the host that stores it:

Run the cinder service-list command to view running services:

Verify that the expected services are available. An example of this check appears at the end of this chapter.

3.6.2. Troubleshooting tips

Backups are asynchronous. The Block Storage backup service performs a small number of static checks upon receiving an API request, such as checking for an invalid volume reference (missing) or a volume that is in-use or attached to an instance. The in-use case requires you to use the --force option.

Note: Using the --force option means that I/O is not quiesced and the resulting volume image may be corrupt.

If the API accepts the request, the backup occurs in the background. Usually, the CLI returns immediately even if the backup fails or is approaching failure. You can query the status of a backup by using the cinder backup API. If an error occurs, review the logs to discover the cause.

3.6.3. Pacemaker

By default, Pacemaker deploys the Block Storage backup service. Pacemaker configures virtual IP addresses, containers, services, and other features as resources in a cluster to ensure that the defined set of OpenStack cluster resources are running and available. When a service or an entire node in a cluster fails, Pacemaker can restart the resource, take the node out of the cluster, or reboot the node. Requests to most services are routed through HAProxy.

For information about how to use Pacemaker for troubleshooting, see Managing high availability services with Pacemaker in the High Availability Deployment and Usage guide.
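As a quick illustration of the service checks described in Section 3.6.1, inspect the State column of the cinder service-list output; the host names below are illustrative values taken from the command listing at the end of this chapter:

cinder service-list
+------------------+--------------------+------+---------+-------+
| Binary           | Host               | Zone | Status  | State |
+------------------+--------------------+------+---------+-------+
| cinder-backup    | hostgroup          | nova | enabled | up    |
| cinder-scheduler | hostgroup          | nova | enabled | up    |
| cinder-volume    | hostgroup@sas-pool | nova | enabled | down  |
+------------------+--------------------+------+---------+-------+

If cinder-backup shows a State of down, review /var/log/containers/cinder/cinder-backup.log for errors connecting to the configured backup back end.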
[ "cinder list", "cinder backup-create _VOLUME_", "+-----------+--------------------------------------+ | Property | Value | +-----------+--------------------------------------+ | id | e9d15fc7-eeae-4ca4-aa72-d52536dc551d | | name | None | | volume_id | 5f75430a-abff-4cc7-b74e-f808234fa6c5 | +-----------+--------------------------------------+", "cinder backup-list", "cinder --os-auth-url <KEYSTONEURL> --os-tenant-name <PROJECTNAME> --os-username <USERNAME> --os-password <PASSWD> backup-create <VOLUME>", "cinder backup-export _BACKUPID_", "+----------------+------------------------------------------+ | Property | Value | +----------------+------------------------------------------+ | backup_service | cinder.backup.drivers.swift | | backup_url | eyJzdGF0dXMiOiAiYXZhaWxhYmxlIiwgIm9iam...| | | ...4NS02ZmY4MzBhZWYwNWUiLCAic2l6ZSI6IDF9 | +----------------+------------------------------------------+", "cinder backup-create _VOLUME_ --incremental --force", "cinder snapshot-list --volume-id _VOLUME_ID_", "cinder snapshot-show _SNAPSHOT_NAME_", "cinder backup-create _VOLUME_ --snapshot-id=_SNAPSHOT_ID_", "cinder backup-create _VOLUME_ --incremental", "openstack volume backup delete --force <backup>", "cinder quota-show <PROJECT_ID>", "cinder quota-update --backups <MAXNUM> <PROJECT_ID>", "cinder quota-update --backup-gigabytes MAXGB <PROJECT_ID>", "cinder quota-usage <PROJECT_ID>", "cinder backup-list", "cinder backup-restore _BACKUP_ID_", "cinder backup-delete _BACKUP_ID_", "cinder backup-restore _BACKUP_ID --volume VOLUME_ID_", "cinder backup-import _backup_service_ _backup_url_", "cinder backup-import cinder.backup.drivers.swift eyJzdGF0dXMi...c2l6ZSI6IDF9 +----------+--------------------------------------+ | Property | Value | +----------+--------------------------------------+ | id | 77951e2f-4aff-4365-8c64-f833802eaa43 | | name | None | +----------+--------------------------------------+", "openstack volume backup set --state error BACKUP_ID", "cinder show", "cinder service-list +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+ | cinder-backup | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-scheduler | hostgroup | nova | enabled | up | 2017-05-15T02:42:25.000000 | - | | cinder-volume | hostgroup@sas-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | | cinder-volume | hostgroup@ssd-pool | nova | enabled | down | 2017-05-14T03:04:01.000000 | - | +------------------+--------------------+------+---------+-------+----------------------------+-----------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/block_storage_backup_guide/using-cinder-backup
9.7. Setting the Highest TLS Encryption Protocol Version
9.7. Setting the Highest TLS Encryption Protocol Version

To set the highest TLS protocol version that Directory Server supports, enter:

If you set the parameter to a value lower than sslVersionMin, Directory Server sets sslVersionMax to the same value as sslVersionMin.

Important: To always use the strongest supported encryption protocol version, do not set the sslVersionMax parameter.
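For example, to cap connections at TLS 1.2, the command might look like the following sketch; the host name server.example.com is carried over from the generic command below, and TLS1.2 is an assumed value for protocol_version:

dsconf -D "cn=Directory Manager" ldap://server.example.com security set --tls-protocol-max="TLS1.2"

You typically need to restart the instance for changed security settings to take effect.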
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com security set --tls-protocol-max=\" protocol_version \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/settings-the-highest-tls-encryption-protocol-version
Appendix B. Using Red Hat Maven repositories
Appendix B. Using Red Hat Maven repositories

This section describes how to use Red Hat-provided Maven repositories in your software.

B.1. Using the online repository

Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page.

There are two ways to configure Maven to use the Red Hat repository:

Add the repository to your Maven settings
Add the repository to your POM file

Adding the repository to your Maven settings

This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled.

Procedure

Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it.

On Linux or UNIX: /home/<username>/.m2/settings.xml
On Windows: C:\Users\<username>\.m2\settings.xml

Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example:

Example: A Maven settings.xml file containing the Red Hat repository

<settings>
  <profiles>
    <profile>
      <id>red-hat</id>
      <repositories>
        <repository>
          <id>red-hat-ga</id>
          <url>https://maven.repository.redhat.com/ga</url>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>red-hat-ga</id>
          <url>https://maven.repository.redhat.com/ga</url>
          <releases>
            <enabled>true</enabled>
          </releases>
          <snapshots>
            <enabled>false</enabled>
          </snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>red-hat</activeProfile>
  </activeProfiles>
</settings>

For more information about Maven configuration, see the Maven settings reference.

Adding the repository to your POM file

To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example:

Example: A Maven pom.xml file containing the Red Hat repository

<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>example-app</artifactId>
  <version>1.0.0</version>
  <repositories>
    <repository>
      <id>red-hat-ga</id>
      <url>https://maven.repository.redhat.com/ga</url>
    </repository>
  </repositories>
</project>

For more information about POM file configuration, see the Maven POM reference.

B.2. Using a local repository

Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem.

To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file:

<repository>
  <id>red-hat-local</id>
  <url>${repository-url}</url>
</repository>

${repository-url} must be a file URL containing the local filesystem path of the extracted repository.

Table B.1. Example URLs for local Maven repositories

Operating system | Filesystem path              | URL
-----------------+------------------------------+----------------------------------
Linux or UNIX    | /home/alice/maven-repository | file:/home/alice/maven-repository
Windows          | C:\repos\red-hat             | file:C:\repos\red-hat
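For example, with the repository archive extracted to /home/alice/maven-repository, as in the first row of Table B.1, the resulting entry would be:

<repository>
  <id>red-hat-local</id>
  <url>file:/home/alice/maven-repository</url>
</repository>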
[ "/home/ <username> /.m2/settings.xml", "C:\\Users\\<username>\\.m2\\settings.xml", "<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>", "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>", "<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_pool_library/using_red_hat_maven_repositories
Chapter 4. Recovering from data loss with VM snapshots
Chapter 4. Recovering from data loss with VM snapshots If a data loss event occurs, you can restore a Virtual Machine (VM) snapshot of a Certificate Authority (CA) replica to repair the lost data, or deploy a new environment from it. 4.1. Recovering from only a VM snapshot If a disaster affects all IdM servers, and only a snapshot of an IdM CA replica virtual machine (VM) is left, you can recreate your deployment by removing all references to the lost servers and installing new replicas. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Boot the desired snapshot of the CA replica VM. Remove replication agreements to any lost replicas. Install a second CA replica. See Installing an IdM replica . The VM CA replica is now the CA renewal server. Red Hat recommends promoting another CA replica in the environment to act as the CA renewal server. See Changing and resetting IdM CA renewal server . Recreate the desired replica topology by deploying additional replicas with the desired services (CA, DNS). See Installing an IdM replica . Update DNS to reflect the new replica topology. If IdM DNS is used, DNS service records are updated automatically. Verify that IdM clients can reach the IdM servers. See Adjusting IdM Clients during recovery . Verification Test the Kerberos server on every replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every replica by retrieving user information. Test the CA server on every CA replica with the ipa cert-show command. Additional resources Planning the replica topology 4.2. Recovering from a VM snapshot among a partially-working environment If a disaster affects some IdM servers while others are still operating properly, you may want to restore the deployment to the state captured in a Virtual Machine (VM) snapshot. For example, if all Certificate Authority (CA) replicas are lost while other replicas are still in production, you will need to bring a CA replica back into the environment. In this scenario, remove references to the lost replicas, restore the CA replica from the snapshot, verify replication, and deploy new replicas. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Remove all replication agreements to the lost servers. See Uninstalling an IdM server . Boot the desired snapshot of the CA replica VM. Remove any replication agreements between the restored server and any lost servers. If the restored server does not have replication agreements to any of the servers still in production, connect the restored server with one of the other servers to update the restored server. Review Directory Server error logs at /var/log/dirsrv/slapd-YOUR-INSTANCE/errors to see if the CA replica from the snapshot correctly synchronizes with the remaining IdM servers. If replication on the restored server fails because its database is too outdated, reinitialize the restored server. If the database on the restored server is correctly synchronized, continue by deploying additional replicas with the desired services (CA, DNS) according to Installing an IdM replica . Verification Test the Kerberos server on every replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every replica by retrieving user information. 
Test the CA server on every CA replica with the ipa cert-show command. Additional resources Recovering from a VM snapshot to establish a new IdM environment 4.3. Recovering from a VM snapshot to establish a new IdM environment If the Certificate Authority (CA) replica from a restored Virtual Machine (VM) snapshot is unable to replicate with other servers, create a new IdM environment from the VM snapshot. To establish a new IdM environment, isolate the VM server, create additional replicas from it, and switch IdM clients to the new environment. Prerequisites You have prepared a VM snapshot of a CA replica VM. See Preparing for data loss with VM snapshots . Procedure Boot the desired snapshot of the CA replica VM. Isolate the restored server from the rest of the current deployment by removing all of its replication topology segments. First, display all domain replication topology segments. Next, delete every domain topology segment involving the restored server. Finally, perform the same actions with any ca topology segments. Install a sufficient number of IdM replicas from the restored server to handle the deployment load. There are now two disconnected IdM deployments running in parallel. Switch the IdM clients to use the new deployment by hard-coding references to the new IdM replicas. See Adjusting IdM clients during recovery . Stop and uninstall IdM servers from the previous deployment. See Uninstalling an IdM server . Verification Test the Kerberos server on every new replica by successfully retrieving a Kerberos ticket-granting ticket as an IdM user. Test the Directory Server and SSSD configuration on every new replica by retrieving user information. Test the CA server on every new CA replica with the ipa cert-show command.
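As a quick, hedged supplement to the verification steps above, you can confirm that clients can discover the new servers through DNS (this sketch assumes IdM-managed DNS and the example.com domain used throughout this chapter):
dig +short -t SRV _ldap._tcp.example.com
dig +short -t SRV _kerberos._udp.example.com
Each new replica should appear in the output, and entries for the uninstalled servers should be gone.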
[ "ipa server-del lost-server1.example.com ipa server-del lost-server2.example.com", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False", "ipa server-del lost-server1.example.com ipa server-del lost-server2.example.com", "ipa topologysegment-add Suffix name: domain Left node: restored-CA-replica.example.com Right node: server3.example.com Segment name [restored-CA-replica.com-to-server3.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: restored-CA-replica.example.com Right node: server3.example.com Connectivity: both", "ipa-replica-manage re-initialize --from server2.example.com", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 15:51:08 11/01/2019 15:51:02 krbtgt/[email protected]", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: [email protected] UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: restored-CA-replica.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: ca ------------------ 1 segments matched ------------------ Segment name: ca_segment Left node: restored-CA-replica.example.com Right node: server4.example.com Connectivity: both ---------------------------- Number of entries returned 1 ---------------------------- ipa topologysegment-del Suffix name: ca Segment name: ca_segment ----------------------------- Deleted segment \"ca_segment\" -----------------------------", "kinit admin Password for [email protected]: klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 10/31/2019 15:51:37 11/01/2019 15:51:02 HTTP/[email protected] 10/31/2019 
15:51:08 11/01/2019 15:51:02 krbtgt/EXAMPLE.COM@EXAMPLE.COM", "ipa user-show admin User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash Principal alias: admin@EXAMPLE.COM UID: 1965200000 GID: 1965200000 Account disabled: False Password: True Member of groups: admins, trust admins Kerberos keys available: True", "ipa cert-show 1 Issuing CA: ipa Certificate: MIIEgjCCAuqgAwIBAgIjoSIP Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Thu Oct 31 19:43:29 2019 UTC Not After: Mon Oct 31 19:43:29 2039 UTC Serial number: 1 Serial number (hex): 0x1 Revoked: False" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/performing_disaster_recovery_with_identity_management/recovering-from-data-loss-with-snapshots_performing-disaster-recovery
Chapter 7. Working with containers
Chapter 7. Working with containers 7.1. Understanding Containers The basic units of OpenShift Container Platform applications are called containers . Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each others' processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Container Platform and Kubernetes add the ability to orchestrate containers across multi-host installations. 7.1.1. About containers and RHEL kernel memory Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache , and a node cache for any NUMA nodes. These caches all consume kernel memory. The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed. To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache , where nproc is the number of processing units available that are reported by the nproc command. The lower limit of container requests should be this value plus the container memory requirements: $(nproc) X 1/2 MiB
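As a rough illustration of this formula, the following sketch computes the estimate directly on a node; it simply restates the calculation above in shell form:
# Estimate kmem_cache overhead: number of processing units times 1/2 MiB.
cpus=$(nproc)
echo "Estimated kmem_cache overhead: $(( cpus * 512 )) KiB (about $(( cpus / 2 )) MiB)"
Add this value to the application's own memory requirement when setting the container's memory request.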
7.1.2. About the container engine and container runtime A container engine is a piece of software that processes user requests, including command line options and image pulls. The container engine uses a container runtime , also called a lower-level container runtime , to run and manage the components required to deploy and operate containers. You likely will not need to interact with the container engine or container runtime. Note The OpenShift Container Platform documentation uses the term container runtime to refer to the lower-level container runtime. Other documentation can refer to the container engine as the container runtime. OpenShift Container Platform uses CRI-O as the container engine and runC or crun as the container runtime. The default container runtime is runC. Both container runtimes adhere to the Open Container Initiative (OCI) runtime specifications. CRI-O is a Kubernetes-native container engine implementation that integrates closely with the operating system to deliver an efficient and optimized Kubernetes experience. The CRI-O container engine runs as a systemd service on each OpenShift Container Platform cluster node. runC, developed by Docker and maintained by the Open Container Project, is a lightweight, portable container runtime written in Go. crun, developed by Red Hat, is a fast and low-memory container runtime fully written in C. As of OpenShift Container Platform 4.15, you can select between the two. crun has several improvements over runC, including: Smaller binary Quicker processing Lower memory footprint runC has some benefits over crun, including: Most popular OCI container runtime. Longer tenure in production. Default container runtime of CRI-O. You can move between the two container runtimes as needed; a sketch follows this paragraph. For information on setting which container runtime to use, see Creating a ContainerRuntimeConfig CR to edit CRI-O parameters .
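The linked procedure covers the details; as a hedged sketch, a ContainerRuntimeConfig CR that switches worker nodes to crun could look like the following. The CR name and the pool label are assumptions based on the MachineConfiguration API, not copied from the linked procedure:
apiVersion: machineconfiguration.openshift.io/v1
kind: ContainerRuntimeConfig
metadata:
  name: enable-crun-worker   # illustrative name
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""   # assumed worker pool label
  containerRuntimeConfig:
    defaultRuntime: crun   # switch the lower-level runtime from runC to crun
Applying a CR like this rolls the change out through the selected machine config pool; removing it reverts the nodes to the default runtime.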
&& sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod: USD oc create -f myapp.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s The pod status, Init:0/2 , indicates it is waiting for the two services. Create the myservice service. Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376 Create the pod: USD oc create -f myservice.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s The pod status, Init:1/2 , indicates it is waiting for one service, in this case the mydb service. Create the mydb service: Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 Create the pod: USD oc create -f mydb.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m The pod status indicated that it is no longer waiting for the services and is running. 7.3. Using volumes to persist container data Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is directory, accessible to the Containers in a pod, where data is stored for the life of the pod. 7.3.1. Understanding volumes Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared. To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Container Platform invokes the fsck utility prior to the mount utility. This occurs when either adding a volume or updating an existing volume. The simplest volume type is emptyDir , which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods. Note emptyDir volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. 7.3.2. Working with volumes using the OpenShift Container Platform CLI You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template like replication controllers or deployment configs. You can also list volumes in pods or any object that has a pod template. The oc set volume command uses the following general syntax: USD oc set volume <object_selection> <operation> <mandatory_parameters> <options> Object selection Specify one of the following for the object_selection parameter in the oc set volume command: Table 7.1. 
Object Selection Syntax Description Example <object_type> <name> Selects <name> of type <object_type> . deploymentConfig registry <object_type> / <name> Selects <name> of type <object_type> . deploymentConfig/registry <object_type> --selector= <object_label_selector> Selects resources of type <object_type> that match the given label selector. deploymentConfig --selector="name=registry" <object_type> --all Selects all resources of type <object_type> . deploymentConfig --all -f or --filename= <file_name> File name, directory, or URL to file to use to edit the resource. -f registry-deployment-config.json Operation Specify --add or --remove for the operation parameter in the oc set volume command. Mandatory parameters Any mandatory parameters are specific to the selected operation and are discussed in later sections. Options Any options are specific to the selected operation and are discussed in later sections. 7.3.3. Listing volumes and volume mounts in a pod You can list volumes and volume mounts in pods or pod templates: Procedure To list volumes: $ oc set volume <object_type>/<name> [options] List volume supported options: Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' For example: To list all volumes for pod p1 : $ oc set volume pod/p1 To list volume v1 defined on all deployment configs: $ oc set volume dc --all --name=v1 7.3.4. Adding volumes to a pod You can add volumes and volume mounts to a pod. Procedure To add a volume, a volume mount, or both to pod templates: $ oc set volume <object_type>/<name> --add [options] Table 7.2. Supported Options for Adding Volumes Option Description Default --name Name of the volume. Automatically generated, if not specified. -t, --type Name of the volume source. Supported values: emptyDir , hostPath , secret , configmap , persistentVolumeClaim or projected . emptyDir -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' -m, --mount-path Mount path inside the selected containers. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --path Host path. Mandatory parameter for --type=hostPath . Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --secret-name Name of the secret. Mandatory parameter for --type=secret . --configmap-name Name of the configmap. Mandatory parameter for --type=configmap . --claim-name Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim . --source Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type . -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To add a new volume source emptyDir to the registry DeploymentConfig object: $ oc set volume dc/registry --add Tip You can alternatively apply the following YAML to add the volume: Example 7.1. 
Sample deployment config with an added volume kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP 1 Add the volume source emptyDir . To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data : $ oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data Tip You can alternatively apply the following YAML to add the volume: Example 7.2. Sample replication controller with added volume and secret kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data 1 Add the volume and secret. 2 Add the container mount path. To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data , and update the DeploymentConfig object on the server: $ oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \ --claim-name=pvc1 --mount-path=/data --containers=c1 Tip You can alternatively apply the following YAML to add the volume: Example 7.3. Sample deployment config with persistent volume added kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data 1 Add the persistent volume claim named pvc1 . 2 Add the container mount path. To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers: $ oc set volume rc --all --add --name=v1 \ --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}' 7.3.5. Updating volumes and volume mounts in a pod You can modify the volumes and volume mounts in a pod. Procedure Updating existing volumes using the --overwrite option: $ oc set volume <object_type>/<name> --add --overwrite [options] For example: To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1 : $ oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 Tip You can alternatively apply the following YAML to replace the volume: Example 7.4. 
Sample replication controller with persistent volume claim named pvc1 kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data 1 Set persistent volume claim to pvc1 . To change the DeploymentConfig object d1 mount point to /opt for volume v1 : $ oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt Tip You can alternatively apply the following YAML to change the mount point: Example 7.5. Sample deployment config with mount point set to /opt kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt 1 Set the mount point to /opt . 7.3.6. Removing volumes and volume mounts from a pod You can remove a volume or volume mount from a pod. Procedure To remove a volume from pod templates: $ oc set volume <object_type>/<name> --remove [options] Table 7.3. Supported options for removing volumes Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' --confirm Indicate that you want to remove multiple volumes at once. -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To remove a volume v1 from the DeploymentConfig object d1 : $ oc set volume dc/d1 --remove --name=v1 To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1 : $ oc set volume dc/d1 --remove --name=v1 --containers=c1 To remove all volumes for replication controller r1 : $ oc set volume rc/r1 --remove --confirm 7.3.7. Configuring volumes for multiple uses in a pod You can configure a volume so that you can share one volume for multiple uses in a single pod, using the volumeMounts.subPath property to specify a subPath value inside a volume instead of the volume's root. Note You cannot add a subPath parameter to an existing scheduled pod. 
Procedure To view the list of files in the volume, run the oc rsh command: $ oc rsh <pod> Example output sh-4.2$ ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3 Specify the subPath : Example Pod spec with subPath parameter apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data 1 Databases are stored in the mysql folder. 2 HTML content is stored in the html folder. 7.4. Mapping volumes using projected volumes A projected volume maps several existing volume sources into the same directory. The following types of volume sources can be projected: Secrets Config Maps Downward API Note All sources are required to be in the same namespace as the pod. 7.4.1. Understanding projected volumes Projected volumes can map any combination of these volume sources into a single directory, allowing the user to: automatically populate a single volume with the keys from multiple secrets, config maps, and with downward API information, so that I can synthesize a single directory with various sources of information; populate a single volume with the keys from multiple secrets, config maps, and with downward API information, explicitly specifying paths for each item, so that I can have full control over the contents of that volume. Important When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume. Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for Windows projected volumes running in OpenShift Container Platform. The following general scenarios show how you can use projected volumes. Config map, secrets, Downward API. Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on if the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs. Config map + secrets. Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file. ConfigMap + Downward API. Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking. Secrets + Downward API. 
Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport. 7.4.1.1. Example Pod specs The following are examples of Pod specs for creating projected volumes. Pod with a secret, a Downward API, and a config map apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: "/projected-volume" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "cpu_limit" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11 1 Add a volumeMounts section for each container that needs the secret. 2 Specify a path to an unused directory where the secret will appear. 3 Set readOnly to true . 4 Add a volumes block to list each projected volume source. 5 Specify any name for the volume. 6 Set the default permission mode for the files; 0400 makes them readable only by the owner. 7 Add a secret. Enter the name of the secret object. Each secret you want to use must be listed. 8 Specify the path to the secrets file under the mountPath . Here, the secrets file is in /projected-volume/my-group/my-username . 9 Add a Downward API source. 10 Add a ConfigMap source. 11 Set the mode for the specific projection. Note If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed. Pod with multiple secrets with a non-default permission mode set apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511 Note The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection. 7.4.1.2. Pathing Considerations Collisions Between Keys when Configured Paths are Identical If you configure any keys with the same path, the pod spec will not be accepted as valid. 
In the following example, the specified paths for mysecret and myconfigmap are the same: apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data Consider the following situations related to the volume file paths. Collisions Between Keys without Configured Paths The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well). Collisions when One Path is Explicit and the Other is Automatically Projected In the event that there is a collision due to a user specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it as before. 7.4.2. Configuring a Projected Volume for a Pod When creating projected volumes, consider the volume file path situations described in Understanding projected volumes . The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory. The user name and password values can be any valid string that is base64 encoded. The following example shows admin in base64: $ echo -n "admin" | base64 Example output YWRtaW4= The following example shows the password 1f2d1e2e67df in base64: $ echo -n "1f2d1e2e67df" | base64 Example output MWYyZDFlMmU2N2Rm Procedure To use a projected volume to mount an existing secret volume source: Create the secret: Create a YAML file similar to the following, replacing the password and user information as appropriate: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= Use the following command to create the secret: $ oc create -f <secrets-filename> For example: $ oc create -f secret.yaml Example output secret "mysecret" created You can check that the secret was created using the following commands: $ oc get secret <secret-name> For example: $ oc get secret mysecret Example output NAME TYPE DATA AGE mysecret Opaque 2 17h $ oc get secret <secret-name> -o yaml For example: $ oc get secret mysecret -o yaml apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: "2107" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque Create a pod with a projected volume. 
Create a YAML file similar to the following, including a volumes section: apiVersion: v1 kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1 1 The name of the secret you created. Create the pod from the configuration file: $ oc create -f <your_yaml_file>.yaml For example: $ oc create -f secret-pod.yaml Example output pod "test-projected-volume" created Verify that the pod container is running, and then watch for changes to the pod: $ oc get pod <name> For example: $ oc get pod test-projected-volume The output should appear similar to the following: Example output NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s In another terminal, use the oc exec command to open a shell to the running container: $ oc exec -it <pod> -- <command> For example: $ oc exec -it test-projected-volume -- /bin/sh In your shell, verify that the projected-volumes directory contains your projected sources: / # ls Example output bin home root tmp dev proc run usr etc projected-volume sys var 7.5. Allowing containers to consume API objects The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Container Platform. Such information includes the pod's name, namespace, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. 7.5.1. Expose pod information to Containers using the Downward API The Downward API contains such information as the pod's name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. Fields within the pod are selected using the FieldRef API type. FieldRef has two fields: Field Description fieldPath The path of the field to select, relative to the pod. apiVersion The API version to interpret the fieldPath selector within. Currently, the valid selectors in the v1 API include: Selector Description metadata.name The pod's name. This is supported in both environment variables and volumes. metadata.namespace The pod's namespace. This is supported in both environment variables and volumes. metadata.labels The pod's labels. This is only supported in volumes and not in environment variables. metadata.annotations The pod's annotations. This is only supported in volumes and not in environment variables. status.podIP The pod's IP. This is only supported in environment variables and not volumes. The apiVersion field, if not specified, defaults to the API version of the enclosing pod template. 7.5.2. Understanding how to consume container values using the downward API Your containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Annotations and labels are available using only a volume plugin. 7.5.2.1. 
Consuming container values using environment variables When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource ) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field. Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are: Pod name Pod project/namespace Procedure Create a new pod spec that contains the environment variables you want the container to consume: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml Verification Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values: $ oc logs -p dapi-env-test-pod 7.5.2.2. Consuming container values using a volume plugin Your containers can consume API values using a volume plugin. Containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Procedure To use the volume plugin: Create a new pod spec that contains the fields you want the container to consume: Create a volume-pod.yaml file similar to the following: kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: "345" annotation2: "456" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never # ... Create the pod from the volume-pod.yaml file: $ oc create -f volume-pod.yaml Verification Check the container's logs and verify the presence of the configured fields: $ oc logs -p dapi-volume-test-pod Example output cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api 7.5.3. Understanding how to consume container resources using the Downward API When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments. You can do this using environment variables or a volume plugin. 7.5.3.1. 
Consuming container resources using environment variables When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables. When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.container field. Note If the resource limits are not included in the container configuration, the downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml 7.5.3.2. Consuming container resources using a volume plugin When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin. When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field. Note If the resource limits are not included in the container configuration, the Downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a volume-pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: "cpu_limit" resourceFieldRef: containerName: client-container resource: limits.cpu - path: "cpu_request" resourceFieldRef: containerName: client-container resource: requests.cpu - path: "mem_limit" resourceFieldRef: containerName: client-container resource: limits.memory - path: "mem_request" resourceFieldRef: containerName: client-container resource: requests.memory # ... Create the pod from the volume-pod.yaml file: $ oc create -f volume-pod.yaml 7.5.4. Consuming secrets using the Downward API When creating pods, you can use the downward API to inject secrets so image and application authors can create an image for specific environments. 
Procedure Create a secret to inject: Create a secret.yaml file similar to the following: apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth Create the secret object from the secret.yaml file: $ oc create -f secret.yaml Create a pod that references the username field from the above Secret object: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml Verification Check the container's logs for the MY_SECRET_USERNAME value: $ oc logs -p dapi-env-test-pod 7.5.5. Consuming configuration maps using the Downward API When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments. Procedure Create a config map with the values to inject: Create a configmap.yaml file similar to the following: apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue Create the config map from the configmap.yaml file: $ oc create -f configmap.yaml Create a pod that references the above config map: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml Verification Check the container's logs for the MY_CONFIGMAP_VALUE value: $ oc logs -p dapi-env-test-pod 7.5.6. Referencing environment variables When creating pods, you can reference the value of a previously defined environment variable by using the $() syntax. If the environment variable reference cannot be resolved, the value will be left as the provided string. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: $(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml Verification Check the container's logs for the MY_ENV_VAR_REF_ENV value: $ oc logs -p dapi-env-test-pod 7.5.7. Escaping environment variable references When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value. 
Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NEW_ENV value: $$(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never # ... Create the pod from the pod.yaml file: $ oc create -f pod.yaml Verification Check the container's logs for the MY_NEW_ENV value: $ oc logs -p dapi-env-test-pod 7.6. Copying files to or from an OpenShift Container Platform container You can use the CLI to copy local files to or from a remote directory in a container using the rsync command. 7.6.1. Understanding how to copy files The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files. $ oc rsync <source> <destination> [-c <container>] 7.6.1.1. Requirements Specifying the Copy Source The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported. When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir> If the directory name ends in a path separator ( / ), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination. Specifying the Copy Destination The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you. Deleting Files at the Destination The --delete flag may be used to delete any files in the remote directory that are not in the local directory. Continuous Syncing on File Change Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls. When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync . Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync , such as --delete . 
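For example, a sketch of a development loop that keeps a pod's /src directory synchronized with a local checkout; the pod and container names reuse the illustrative names from the examples that follow:
$ oc rsync --watch --delete ./src devpod1234:/src -c user-container
The command keeps running, pushing changes after each quiet period and deleting remote files that no longer exist locally.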
Note In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command. Procedure To copy a local directory to a pod directory: USD oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name> For example: USD oc rsync /home/user/source devpod1234:/src -c user-container To copy a pod directory to a local directory: USD oc rsync devpod1234:/src /home/user/source For example: USD oc rsync devpod1234:/src/status.txt /home/user/ 7.6.3. Using advanced Rsync features The oc rsync command exposes fewer command line options than standard rsync . If you want to use a standard rsync command line option that is not available in oc rsync , for example the --exclude-from=FILE option, it might be possible to use standard rsync 's --rsh ( -e ) option or RSYNC_RSH environment variable as a workaround, as follows: USD rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> or: Export the RSYNC_RSH variable: USD export RSYNC_RSH='oc rsh' Then, run the rsync command: USD rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> Both of the above examples configure standard rsync to use oc rsh as its remote shell program to enable it to connect to the remote pod, and are an alternative to running oc rsync . 7.7. Executing remote commands in an OpenShift Container Platform container You can use the CLI to execute remote commands in an OpenShift Container Platform container. 7.7.1. Executing remote commands in containers Support for remote container command execution is built into the CLI. Procedure To run a command in a container: USD oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>] For example: USD oc exec mypod date Example output Thu Apr 9 02:21:53 UTC 2015 Important For security purposes, the oc exec command does not work when accessing privileged containers except when the command is executed by a cluster-admin user. 7.7.2. Protocol for initiating a remote command from a client Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command> In the above URL: <node_name> is the FQDN of the node. <namespace> is the project of the target pod. <pod> is the name of the target pod. <container> is the name of the target container. <command> is the desired command to be executed. For example: /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date Additionally, the client can add parameters to the request to indicate if: the client should send input to the remote container's command (stdin). the client's terminal is a TTY. the remote container's command should send output from stdout to the client. the remote container's command should send output from stderr to the client. After sending an exec request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2 . The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin , stdout , or stderr . The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. 7.8. Using port forwarding to access applications in a container OpenShift Container Platform supports port forwarding to pods. 7.8.1.
Understanding port forwarding You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. Support for port forwarding is built into the CLI: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] The CLI listens on each local port specified by the user, forwarding using the protocol described below. Ports may be specified using the following formats: 5000 The client listens on port 5000 locally and forwards to 5000 in the pod. 6000:5000 The client listens on port 6000 locally and forwards to 5000 in the pod. :5000 or 0:5000 The client selects a free local port and forwards to 5000 in the pod. OpenShift Container Platform handles port-forward requests from clients. Upon receiving a request, OpenShift Container Platform upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Container Platform receives a new stream, it copies data between the stream and the pod's port. Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Container Platform implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port. However, a custom implementation could include running a helper pod that then runs nsenter and socat , so that those binaries are not required to be installed on the host. 7.8.2. Using port forwarding You can use the CLI to port-forward one or more local ports to a pod. Procedure Use the following command to listen on the specified port in a pod: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] For example: Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod: USD oc port-forward <pod> 5000 6000 Example output Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000 Use the following command to listen on port 8888 locally and forward to 5000 in the pod: USD oc port-forward <pod> 8888:5000 Example output Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000 Use the following command to listen on a free port locally and forward to 5000 in the pod: USD oc port-forward <pod> :5000 Example output Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000 Or: USD oc port-forward <pod> 0:5000 7.8.3. Protocol for initiating port forwarding from a client Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/portForward/<namespace>/<pod> In the above URL: <node_name> is the FQDN of the node. <namespace> is the namespace of the target pod. <pod> is the name of the target pod. For example: /proxy/nodes/node123.openshift.com/portForward/myns/mypod After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hypertext Transfer Protocol Version 2 (HTTP/2) . The client creates a stream with the port header containing the target port in the pod. All data written to the stream is delivered via the kubelet to the target pod and port. Similarly, all data sent from the pod for that forwarded connection is delivered back to the same stream in the client.
The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. 7.9. Using sysctls in containers Sysctl settings are exposed through Kubernetes, allowing users to modify certain kernel parameters at runtime. Only sysctls that are namespaced can be set independently on pods. If a sysctl is not namespaced, known as node-level , you must use another method of setting the sysctl, such as by using the Node Tuning Operator. Network sysctls are a special category of sysctl. Network sysctls include: System-wide sysctls, for example net.ipv4.ip_local_port_range , that are valid for all networking. You can set these independently for each pod on a node. Interface-specific sysctls, for example net.ipv4.conf.IFNAME.accept_local , that only apply to a specific additional network interface for a given pod. You can set these independently for each additional network configuration. You set these by using a configuration in the tuning-cni after the network interfaces are created. Moreover, only those sysctls considered safe are allowed by default; you can manually enable other unsafe sysctls on the node to make them available to the user. If you are setting a node-level sysctl, you can find information on this procedure in the section Using the Node Tuning Operator . 7.9.1. About sysctls In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available from the /proc/sys/ virtual process file system. The parameters cover various subsystems, such as: kernel (common prefix: kernel. ) networking (common prefix: net. ) virtual memory (common prefix: vm. ) MDADM (common prefix: dev. ) More subsystems are described in Kernel documentation . To get a list of all parameters, run: USD sudo sysctl -a 7.9.2. Namespaced and node-level sysctls A number of sysctls are namespaced in the Linux kernel. This means that you can set them independently for each pod on a node. Being namespaced is a requirement for sysctls to be accessible in a pod context within Kubernetes. The following sysctls are known to be namespaced: kernel.shm* kernel.msg* kernel.sem fs.mqueue.* Additionally, most of the sysctls in the net.* group are known to be namespaced. Their namespace adoption differs based on the kernel version and distributor. Sysctls that are not namespaced are called node-level and must be set manually by the cluster administrator, either by means of the underlying Linux distribution of the nodes, such as by modifying the /etc/sysctl.conf file, or by using a daemon set with privileged containers. You can use the Node Tuning Operator to set node-level sysctls. Note Consider marking nodes with special sysctls as tainted. Only schedule pods onto them that need those sysctl settings. Use the taints and toleration feature to mark the nodes. 7.9.3. Safe and unsafe sysctls Sysctls are grouped into safe and unsafe sysctls. For system-wide sysctls to be considered safe, they must be namespaced. A namespaced sysctl ensures there is isolation between namespaces and therefore pods. If you set a sysctl for one pod, it must not do any of the following: Influence any other pod on the node Harm the node's health Gain CPU or memory resources outside of the resource limits of a pod Note Being namespaced alone is not sufficient for the sysctl to be considered safe. Any sysctl that is not added to the allowed list on OpenShift Container Platform is considered unsafe for OpenShift Container Platform.
Unsafe sysctls are not allowed by default. For system-wide sysctls the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch. Note You cannot manually enable interface-specific unsafe sysctls. OpenShift Container Platform adds the following system-wide and interface-specific safe sysctls to an allowed safe list: Table 7.4. System-wide safe sysctls sysctl Description kernel.shm_rmid_forced When set to 1 , all shared memory objects in current IPC namespace are automatically forced to use IPC_RMID. For more information, see shm_rmid_forced . net.ipv4.ip_local_port_range Defines the local port range that is used by TCP and UDP to choose the local port. The first number is the first port number, and the second number is the last local port number. If possible, it is better if these numbers have different parity (one even and one odd value). They must be greater than or equal to ip_unprivileged_port_start . The default values are 32768 and 60999 respectively. For more information, see ip_local_port_range . net.ipv4.tcp_syncookies When net.ipv4.tcp_syncookies is set, the kernel handles TCP SYN packets normally until the half-open connection queue is full, at which time, the SYN cookie functionality kicks in. This functionality allows the system to keep accepting valid connections, even if under a denial-of-service attack. For more information, see tcp_syncookies . net.ipv4.ping_group_range This restricts ICMP_PROTO datagram sockets to users in the group range. The default is 1 0 , meaning that nobody, not even root, can create ping sockets. For more information, see ping_group_range . net.ipv4.ip_unprivileged_port_start This defines the first unprivileged port in the network namespace. To disable all privileged ports, set this to 0 . Privileged ports must not overlap with the ip_local_port_range . For more information, see ip_unprivileged_port_start . net.ipv4.ip_local_reserved_ports Specify a range of comma-separated local ports that you want to reserve for applications or services. Table 7.5. Interface-specific safe sysctls sysctl Description net.ipv4.conf.IFNAME.accept_redirects Accept IPv4 ICMP redirect messages. net.ipv4.conf.IFNAME.accept_source_route Accept IPv4 packets with strict source route (SRR) option. net.ipv4.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv4 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv4.conf.IFNAME.arp_notify Define mode for notification of IPv4 address and device changes. net.ipv4.conf.IFNAME.disable_policy Disable IPSEC policy (SPD) for this IPv4 interface. net.ipv4.conf.IFNAME.secure_redirects Accept ICMP redirect messages only to gateways listed in the interface's current gateway list. net.ipv4.conf.IFNAME.send_redirects Send redirects is enabled only if the node acts as a router. That is, a host should not send an ICMP redirect message. It is used by routers to notify the host about a better routing path that is available for a particular destination. net.ipv6.conf.IFNAME.accept_ra Accept IPv6 Router advertisements; autoconfigure using them. It also determines whether or not to transmit router solicitations. Router solicitations are transmitted only if the functional setting is to accept router advertisements. net.ipv6.conf.IFNAME.accept_redirects Accept IPv6 ICMP redirect messages. 
net.ipv6.conf.IFNAME.accept_source_route Accept IPv6 packets with SRR option. net.ipv6.conf.IFNAME.arp_accept Define behavior for gratuitous ARP frames with an IPv6 address that is not already present in the ARP table: 0 - Do not create new entries in the ARP table. 1 - Create new entries in the ARP table. net.ipv6.conf.IFNAME.arp_notify Define mode for notification of IPv6 address and device changes. net.ipv6.neigh.IFNAME.base_reachable_time_ms This parameter controls the hardware address to IP mapping lifetime in the neighbour table for IPv6. net.ipv6.neigh.IFNAME.retrans_time_ms Set the retransmit timer for neighbor discovery messages. Note When setting these values using the tuning CNI plugin, use the value IFNAME literally. The interface name is represented by the IFNAME token, and is replaced with the actual name of the interface at runtime. 7.9.4. Updating the interface-specific safe sysctls list OpenShift Container Platform includes a predefined list of safe interface-specific sysctls . You can modify this list by updating the cni-sysctl-allowlist in the openshift-multus namespace. Important The support for updating the interface-specific safe sysctls list is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Follow this procedure to modify the predefined list of safe sysctls . This procedure describes how to extend the default allow list. Procedure View the existing predefined list by running the following command: USD oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml Expected output apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.15.0-0.nightly-2022-11-16-003434 creationTimestamp: "2022-11-17T14:09:27Z" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: "2422" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3 Edit the list by using the following command: USD oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml For example, to implement stricter reverse path forwarding, add ^net.ipv4.conf.IFNAME.rp_filterUSD and ^net.ipv6.conf.IFNAME.rp_filterUSD to the list as shown here: # Please edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures.
# apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD Save the changes to the file and exit. Note The removal of sysctls is also supported. Edit the file, remove the sysctl or sysctls then save the changes and exit. Verification Follow this procedure to enforce stricter reverse path forwarding for IPv4. For more information on reverse path forwarding see Reverse Path Forwarding . Create a network attachment definition, such as reverse-path-fwd-example.yaml , with the following content: apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ "cniVersion": "0.4.0", "name": "tuningnad", "plugins": [{ "type": "bridge" }, { "type": "tuning", "sysctl": { "net.ipv4.conf.IFNAME.rp_filter": "1" } } ] }' Apply the yaml by running the following command: USD oc apply -f reverse-path-fwd-example.yaml Example output networkattachmentdefinition.k8.cni.cncf.io/tuningnad created Create a pod such as examplepod.yaml using the following YAML: apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL 1 Specify the name of the configured NetworkAttachmentDefinition . Apply the yaml by running the following command: USD oc apply -f examplepod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s Log in to the pod by running the following command: USD oc rsh example Verify the value of the configured sysctl flag. For example, find the value net.ipv4.conf.net1.rp_filter by running the following command: sh-4.4# sysctl net.ipv4.conf.net1.rp_filter Expected output net.ipv4.conf.net1.rp_filter = 1 Additional resources Linux networking documentation 7.9.5. Starting a pod with safe sysctls You can set sysctls on pods using the pod's securityContext . The securityContext applies to all containers in the same pod. Safe sysctls are allowed by default. This example uses the pod securityContext to set the following safe sysctls: kernel.shm_rmid_forced net.ipv4.ip_local_port_range net.ipv4.tcp_syncookies net.ipv4.ping_group_range Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. Use this procedure to start a pod with the configured sysctl settings. Note In most cases you modify an existing pod definition and add the securityContext spec. 
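A minimal sketch of that addition to an existing pod definition follows; only the sysctls list under the pod-level securityContext is new here, and the sysctl name and value shown are examples taken from the safe list above: spec: securityContext: sysctls: - name: kernel.shm_rmid_forced value: "1" The full procedure below shows this stanza in the context of a complete pod definition.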
Procedure Create a YAML file sysctl_pod.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: ["ALL"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "1" - name: net.ipv4.ip_local_port_range value: "32770 60666" - name: net.ipv4.tcp_syncookies value: "0" - name: net.ipv4.ping_group_range value: "0 200000000" 1 runAsUser controls which user ID the container is run with. 2 runAsGroup controls which primary group ID the container is run with. 3 allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, it defaults to true. This boolean directly controls whether the no_new_privs flag gets set on the container process. 4 capabilities permit privileged actions without giving full root access. This policy ensures all capabilities are dropped from the pod. 5 runAsNonRoot: true requires that the container run with a user whose UID is not 0. 6 RuntimeDefault enables the default seccomp profile for a pod or container workload. Create the pod by running the following command: USD oc apply -f sysctl_pod.yaml Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s Log in to the pod by running the following command: USD oc rsh sysctl-example Verify the values of the configured sysctl flags. For example, find the value kernel.shm_rmid_forced by running the following command: sh-4.4# sysctl kernel.shm_rmid_forced Expected output kernel.shm_rmid_forced = 1 7.9.6. Starting a pod with unsafe sysctls A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes. The following example uses the pod securityContext to set a safe sysctl kernel.shm_rmid_forced and two unsafe sysctls, net.core.somaxconn and kernel.msgmax . There is no distinction between safe and unsafe sysctls in the specification. Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects.
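Because pods that request unsafe sysctls can only run where those sysctls have been enabled, you can steer them onto the right nodes, for example with a node selector in the pod specification; the label shown here is hypothetical and must match a label that you applied to the enabled nodes: spec: nodeSelector: sysctl-allowed: "true"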
The following example illustrates what happens when you add safe and unsafe sysctls to a pod specification: Procedure Create a YAML file sysctl-example-unsafe.yaml that defines an example pod and add the securityContext specification, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod using the following command: USD oc apply -f sysctl-example-unsafe.yaml Verify that the pod is scheduled but does not deploy because unsafe sysctls are not allowed for the node using the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s 7.9.7. Enabling unsafe sysctls A cluster administrator can allow certain unsafe sysctls for very special situations such as high performance or real-time application tuning. If you want to use unsafe sysctls, a cluster administrator must enable them individually for a specific type of node. The sysctls must be namespaced. You can further control which sysctls are set in pods by specifying lists of sysctls or sysctl patterns in the allowedUnsafeSysctls field of the Security Context Constraints. The allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning. Warning Due to their nature of being unsafe, the use of unsafe sysctls is at-your-own-risk and can lead to severe problems, such as improper behavior of containers, resource shortage, or breaking a node. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to decide how to label your machine config by running the following command: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m Add a label to the machine config pool where the containers with the unsafe sysctls will run by running the following command: USD oc label machineconfigpool worker custom-kubelet=sysctl Create a YAML file set-sysctl-worker.yaml that defines a KubeletConfig custom resource (CR): apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - "kernel.msg*" - "net.core.somaxconn" 1 Specify the label from the machine config pool. 2 List the unsafe sysctls you want to allow. 
Create the object by running the following command: USD oc apply -f set-sysctl-worker.yaml Wait for the Machine Config Operator to generate the new rendered configuration and apply it to the machines by running the following command: USD oc get machineconfigpool worker -w After some minutes the UPDATING status changes from True to False: NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m Create a YAML file sysctl-example-safe-unsafe.yaml that defines an example pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: ["bin/bash", "-c", "sleep INF"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: ["ALL"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" Create the pod by running the following command: USD oc apply -f sysctl-example-safe-unsafe.yaml Expected output Warning: would violate PodSecurity "restricted:latest": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created Verify that the pod is created by running the following command: USD oc get pod Example output NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s Log in to the pod by running the following command: USD oc rsh sysctl-example-safe-unsafe Verify the values of the configured sysctl flags. For example, find the value net.core.somaxconn by running the following command: sh-4.4# sysctl net.core.somaxconn Expected output net.core.somaxconn = 1024 The unsafe sysctl is now allowed and the value is set as defined in the securityContext spec of the updated pod specification. 7.9.8. Additional resources Configuring system controls by using the tuning CNI Using the Node Tuning Operator 7.10. Accessing faster builds with /dev/fuse You can configure your pods with the /dev/fuse device to access faster builds. 7.10.1. Configuring /dev/fuse on unprivileged pods As an alternative to the virtual filesystem, you can configure the /dev/fuse device to the io.kubernetes.cri-o.Devices annotation to access faster builds within unprivileged pods. Using /dev/fuse is secure, efficient, and scalable, and allows unprivileged users to mount an overlay filesystem as if the unprivileged pod was privileged. Procedure Create the pod. USD oc exec -ti no-priv -- /bin/bash USD cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF USD podman build . Implement /dev/fuse by adding the /dev/fuse device to the io.kubernetes.cri-o.Devices annotation. io.kubernetes.cri-o.Devices: "/dev/fuse" For example: apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: "/dev/fuse" Configure the /dev/fuse device in your pod specifications. spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - "1000000" securityContext: runAsUser: 1000
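After the pod is running, one way to confirm that the device was exposed is to list it from inside the pod; the pod name here matches the example above: USD oc exec -ti podman-pod -- ls -l /dev/fuse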
[ "USD(nproc) X 1/2 MiB", "for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1", "curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'", "apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f myapp.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s", "kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f myservice.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s", "kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377", "oc create -f mydb.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m", "oc set volume <object_selection> <operation> <mandatory_parameters> <options>", "oc set volume <object_type>/<name> [options]", "oc set volume pod/p1", "oc set volume dc --all --name=v1", "oc set volume <object_type>/<name> --add [options]", "oc set volume dc/registry --add", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP", "oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume rc --all --add --name=v1 
--source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'", "oc set volume <object_type>/<name> --add --overwrite [options]", "oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data", "oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt", "oc set volume <object_type>/<name> --remove [options]", "oc set volume dc/d1 --remove --name=v1", "oc set volume dc/d1 --remove --name=v1 --containers=c1", "oc set volume rc/r1 --remove --confirm", "oc rsh <pod>", "sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3", "apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - 
key: password path: my-group/my-password mode: 511", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data", "echo -n \"admin\" | base64", "YWRtaW4=", "echo -n \"1f2d1e2e67df\" | base64", "MWYyZDFlMmU2N2Rm", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=", "oc create -f <secrets-filename>", "oc create -f secret.yaml", "secret \"mysecret\" created", "oc get secret <secret-name>", "oc get secret mysecret", "NAME TYPE DATA AGE mysecret Opaque 2 17h", "oc get secret <secret-name> -o yaml", "oc get secret mysecret -o yaml", "apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque", "kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1", "oc create -f <your_yaml_file>.yaml", "oc create -f secret-pod.yaml", "pod \"test-projected-volume\" created", "oc get pod <name>", "oc get pod test-projected-volume", "NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s", "oc exec -it <pod> <command>", "oc exec -it test-projected-volume -- /bin/sh", "/ # ls", "bin home root tmp dev proc run usr etc projected-volume sys var", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels 
path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never", "oc create -f volume-pod.yaml", "oc logs -p dapi-volume-test-pod", "cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory", "oc create -f pod.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory", "oc create -f volume-pod.yaml", "apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth", "oc create -f secret.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue", "oc create -f configmap.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: 
MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "oc rsync <source> <destination> [-c <container>]", "<pod name>:<dir>", "oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>", "oc rsync /home/user/source devpod1234:/src -c user-container", "oc rsync devpod1234:/src /home/user/source", "oc rsync devpod1234:/src/status.txt /home/user/", "rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "export RSYNC_RSH='oc rsh'", "rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]", "oc exec mypod date", "Thu Apr 9 02:21:53 UTC 2015", "/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>", "/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> 5000 6000", "Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000", "oc port-forward <pod> 8888:5000", "Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000", "oc port-forward <pod> :5000", "Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000", "oc port-forward <pod> 0:5000", "/proxy/nodes/<node_name>/portForward/<namespace>/<pod>", "/proxy/nodes/node123.openshift.com/portForward/myns/mypod", "sudo sysctl -a", "oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml", "apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.15.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3", "oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml", "Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. 
# apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'", "oc apply -f reverse-path-fwd-example.yaml", "networkattachmentdefinition.k8.cni.cncf.io/tuningnad created", "apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s", "oc rsh example", "sh-4.4# sysctl net.ipv4.conf.net1.rp_filter", "net.ipv4.conf.net1.rp_filter = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"", "oc apply -f sysctl_pod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s", "oc rsh sysctl-example", "sh-4.4# sysctl kernel.shm_rmid_forced", "kernel.shm_rmid_forced = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-unsafe.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m", "oc label machineconfigpool worker custom-kubelet=sysctl", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"", "oc apply -f set-sysctl-worker.yaml", "oc get machineconfigpool worker -w", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-safe-unsafe.yaml", "Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s", "oc rsh sysctl-example-safe-unsafe", "sh-4.4# sysctl net.core.somaxconn", "net.core.somaxconn = 1024", "oc exec -ti no-priv -- /bin/bash", "cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF", "podman build .", "io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/working-with-containers
Chapter 8. Hardware Enablement
Chapter 8. Hardware Enablement Support of Future Intel SOC Processors Device support is enabled in the operating system for future Intel System-on-Chip (SOC) processors. These include Dual Atom processors, memory controller, SATA, Universal Asynchronous Receiver/Transmitter, System Management Bus (SMBUS), USB and Intel Legacy Block (ILB - lpc, timers, SMBUS (i2c_801 module)). Support of 12Gbps LSI SAS Devices The mpt3sas driver adds support for 12Gbps SAS devices from LSI in Red Hat Enterprise Linux. Support of Dynamic Hardware Partitioning and System Board Slot Recognition The dynamic hardware partitioning and system board slot recognition features alert high-level system middleware or applications for reconfiguration and allow users to grow the system to support additional workloads without reboot. Support for future Intel 2D and 3D Graphics Support for future Intel 2D and 3D graphics has been added to allow systems using future Intel processors to be certified through the Red Hat Hardware Certification program. Frequency Sensitivity Feedback Monitor Frequency sensitivity feedback monitor provides the operating system with better information so that it can make better frequency change decisions while saving power. ECC Memory Support The Error-correcting code (ECC) memory has been enabled for a future generation of AMD processors. This feature provides the ability to check for performance and errors by accessing ECC memory related counters and status bits. Support for AMD Systems with More Than 1TB Memory The kernel now supports memory configurations with more than 1TB of RAM on AMD systems.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_release_notes/bh-chap-hardware-enablement
Chapter 6. Understanding and creating service accounts
Chapter 6. Understanding and creating service accounts 6.1. Service accounts overview A service account is a Red Hat OpenShift Service on AWS account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the Red Hat OpenShift Service on AWS CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 6.1.1. Automatically generated image pull secrets By default, Red Hat OpenShift Service on AWS creates an image pull secret for each service account. Note Prior to Red Hat OpenShift Service on AWS 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with Red Hat OpenShift Service on AWS 4.16, this service account API token secret is no longer created. After upgrading to 4.16, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 6.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> 6.3. Granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project.
For example, to add the view role to the robot service account in the top-secret project: $ oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> : $ oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: $ oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: $ oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers
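As a quick end-to-end check of a role binding, you can request a short-lived API token for the service account and call the API with it. A minimal sketch, assuming oc 4.11 or later and the robot service account in the top-secret project; the API server URL is a placeholder for your cluster:
$ oc create token robot -n top-secret
$ curl -H "Authorization: Bearer <token>" https://api.<cluster_domain>:6443/api/v1/namespaces/top-secret/pods
Because the token printed by oc create token is bound and short-lived, it is safer than a long-lived secret for this kind of ad hoc testing.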
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/understanding-and-creating-service-accounts
Chapter 9. Customizing Red Hat Quay on OpenShift Container Platform
Chapter 9. Customizing Red Hat Quay on OpenShift Container Platform After deployment, you can customize the Red Hat Quay application by editing the Red Hat Quay configuration bundle secret spec.configBundleSecret . You can also change the managed status of components and configure resource requests for some components in the spec.components object of the QuayRegistry resource. 9.1. Editing the config bundle secret in the OpenShift Container Platform console Use the following procedure to edit the config bundle secret in the OpenShift Container Platform console. Procedure On the Red Hat Quay Registry overview screen, click the link for the Config Bundle Secret . To edit the secret, click Actions → Edit Secret . Modify the configuration and save the changes. Monitor the deployment to ensure successful completion and that the configuration changes have taken effect. 9.2. Determining QuayRegistry endpoints and secrets Use the following procedure to find QuayRegistry endpoints and secrets. Procedure You can examine the QuayRegistry resource, by using oc describe quayregistry or oc get quayregistry -o yaml , to find the current endpoints and secrets. For example, enter the following command: $ oc get quayregistry example-registry -n quay-enterprise -o yaml Example output apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: ... name: example-registry namespace: quay-enterprise ... spec: components: - kind: quay managed: true ... - kind: clairpostgres managed: true configBundleSecret: init-config-bundle-secret 1 status: currentVersion: 3.7.0 lastUpdated: 2022-05-11 13:28:38.199476938 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org 2 1 The config bundle secret, containing the config.yaml file and any SSL/TLS certificates. 2 The URL for your registry, for browser access to the registry UI, and for the registry API endpoint. 9.3. Downloading the existing configuration The following procedures detail how to download the existing configuration using different strategies. 9.3.1. Using the config bundle secret to download the existing configuration You can use the config bundle secret to download the existing configuration. Procedure Describe the QuayRegistry resource by entering the following command: $ oc describe quayregistry -n <quay_namespace> # ... Config Bundle Secret: example-registry-config-bundle-v123x # ... Obtain the secret data by entering the following command: $ oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}' Example output { "config.yaml": "RkVBVFVSRV9VU0 ... MDAwMAo=" } Decode the data by entering the following command: $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode Example output FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_PROXY_CACHE: true FEATURE_BUILD_SUPPORT: true DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000 Optional: You can export the data to a YAML file in the current directory by appending the shell redirection >> config.yaml to the command. For example: $ echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml
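If you edit the decoded config.yaml and want to apply it back without using the console, one approach is to create a new secret from the file and point the QuayRegistry resource at it. A minimal sketch, assuming the quay-enterprise namespace and the example-registry registry; the secret name new-config-bundle is arbitrary:
$ oc create secret generic new-config-bundle -n quay-enterprise --from-file=config.yaml=./config.yaml
$ oc patch quayregistry example-registry -n quay-enterprise --type=merge -p '{"spec":{"configBundleSecret":"new-config-bundle"}}'
The Operator then reconciles the change to spec.configBundleSecret and redeploys Red Hat Quay with the new configuration.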
[ "oc get quayregistry example-registry -n quay-enterprise -o yaml", "apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: quay managed: true - kind: clairpostgres managed: true configBundleSecret: init-config-bundle-secret 1 status: currentVersion: 3.7.0 lastUpdated: 2022-05-11 13:28:38.199476938 +0000 UTC registryEndpoint: https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org 2", "oc describe quayregistry -n <quay_namespace>", "Config Bundle Secret: example-registry-config-bundle-v123x", "oc get secret -n <quay_namespace> <example-registry-config-bundle-v123x> -o jsonpath='{.data}'", "{ \"config.yaml\": \"RkVBVFVSRV9VU0 ... MDAwMAo=\" }", "echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode", "FEATURE_USER_INITIALIZE: true BROWSER_API_CALLS_XHR_ONLY: false SUPER_USERS: - quayadmin FEATURE_USER_CREATION: false FEATURE_QUOTA_MANAGEMENT: true FEATURE_PROXY_CACHE: true FEATURE_BUILD_SUPPORT: true DEFAULT_SYSTEM_REJECT_QUOTA_BYTES: 102400000", "echo 'RkVBVFVSRV9VU0 ... MDAwMAo=' | base64 --decode >> config.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-config-cli
7.4. Recovering Physical Volume Metadata
7.4. Recovering Physical Volume Metadata If the volume group metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you will get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID. You may be able to recover the data on the physical volume by writing a new metadata area on the physical volume specifying the same UUID as the lost metadata. Warning You should not attempt this procedure with a working LVM logical volume. You will lose your data if you specify the incorrect UUID. The following example shows the sort of output you may see if the metadata area is missing or corrupted. You may be able to find the UUID for the physical volume that was overwritten by looking in the /etc/lvm/archive directory. Look in the file VolumeGroupName_xxxx .vg for the last known valid archived LVM metadata for that volume group. Alternatively, you may find that deactivating the volume and setting the partial ( -P ) argument will enable you to find the UUID of the missing corrupted physical volume. Use the --uuid and --restorefile arguments of the pvcreate command to restore the physical volume. The following example labels the /dev/sdh1 device as a physical volume with the UUID indicated above, FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk . This command restores the physical volume label with the metadata information contained in VG_00050.vg , the most recent good archived metadata for the volume group. The --restorefile argument instructs the pvcreate command to make the new physical volume compatible with the old one on the volume group, ensuring that the new metadata will not be placed where the old physical volume contained data (which could happen, for example, if the original pvcreate command had used the command line arguments that control metadata placement, or if the physical volume was originally created using a different version of the software that used different defaults). The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas. You can then use the vgcfgrestore command to restore the volume group's metadata. You can now display the logical volumes. The following commands activate the volumes and display the active volumes. If the on-disk LVM metadata takes at least as much space as the data that overrode it, this command can recover the physical volume. If the overwriting data extended past the metadata area, the data on the volume may have been affected. You might be able to use the fsck command to recover that data.
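Putting the commands from this section together, a minimal recovery sketch; the UUID, archive file, device, and volume group names are the examples used above and must be replaced with the values from your own /etc/lvm/archive contents:
# pvcreate --uuid "FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1
# vgcfgrestore VG
# lvchange -ay /dev/VG/stripe
# fsck /dev/VG/stripe
Run fsck only if you suspect that the overwriting data extended past the metadata area into the data area of the volume.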
[ "lvs -a -o +devices Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG.", "vgchange -an --partial Partial mode. Incomplete volume groups will be activated read-only. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.", "pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 Physical volume \"/dev/sdh1\" successfully created", "vgcfgrestore VG Restored volume group VG", "lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi--- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi--- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)", "lvchange -ay /dev/VG/stripe lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi-a- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi-a- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mdatarecover
17.3. DNS Notes
17.3. DNS Notes Wildcards cannot be used when configuring DNS names. Only explicit DNS domain names are supported. The rndc service is not configured by the --setup-dns option. This service must be configured manually after the IdM server is configured.
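A minimal sketch of the manual rndc configuration, assuming the default Red Hat Enterprise Linux 6 paths; the ownership and mode shown are an assumption appropriate for a key file shared with the named service:
# rndc-confgen -a
# chown root:named /etc/rndc.key
# chmod 0640 /etc/rndc.key
The rndc-confgen -a command writes a shared key to /etc/rndc.key, which the named service and the rndc utility both read to authenticate the control channel.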
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/dns-notes-setup
Chapter 1. Introduction to Red Hat JBoss Data Grid 6.6.2
Chapter 1. Introduction to Red Hat JBoss Data Grid 6.6.2 Welcome to Red Hat JBoss Data Grid 6.6.2. As you become familiar with the newest version of JBoss Data Grid, these Release Notes provide you with information about new features, as well as known and resolved issues. Use this document in conjunction with the entire JBoss Data Grid documentation suite, available at the Red Hat Customer Portal's JBoss Data Grid documentation page . 1.1. About Red Hat JBoss Data Grid Red Hat's JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high performance, highly available, and to scale linearly. JBoss Data Grid is accessible for both Java and non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk, and easily accessible using the REST, Memcached, and Hot Rod protocols, or directly in process through a traditional Java Map API.
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.2_release_notes/chap-introduction_to_red_hat_jboss_data_grid_6.6.2
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System
2.2. Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System This section describes the steps for installing the KVM hypervisor on an existing Red Hat Enterprise Linux 7 system. To install the packages, your machine must be registered and subscribed to the Red Hat Customer Portal. To register using Red Hat Subscription Manager, run the subscription-manager register command and follow the prompts. Alternatively, run the Red Hat Subscription Manager application from Applications → System Tools on the desktop to register. If you do not have a valid Red Hat subscription, visit the Red Hat online store to obtain one. For more information on registering and subscribing a system to the Red Hat Customer Portal, see https://access.redhat.com/solutions/253273 . 2.2.1. Installing Virtualization Packages Manually To use virtualization on Red Hat Enterprise Linux, at minimum, you need to install the following packages: qemu-kvm : This package provides the user-level KVM emulator and facilitates communication between hosts and guest virtual machines. qemu-img : This package provides disk management for guest virtual machines. Note The qemu-img package is installed as a dependency of the qemu-kvm package. libvirt : This package provides the server and host-side libraries for interacting with hypervisors and host systems, and the libvirtd daemon that handles the library calls, manages virtual machines, and controls the hypervisor. To install these packages, enter the following command: Several additional virtualization management packages are also available and are recommended when using virtualization: virt-install : This package provides the virt-install command for creating virtual machines from the command line. libvirt-python : This package contains a module that permits applications written in the Python programming language to use the interface supplied by the libvirt API. virt-manager : This package provides the virt-manager tool, also known as Virtual Machine Manager . This is a graphical tool for administering virtual machines. It uses the libvirt-client library as the management API. libvirt-client : This package provides the client-side APIs and libraries for accessing libvirt servers. The libvirt-client package includes the virsh command-line tool to manage and control virtual machines and hypervisors from the command line or a special virtualization shell. You can install all of these recommended virtualization packages with the following command: 2.2.2. Installing Virtualization Package Groups The virtualization packages can also be installed from package groups. You can view the list of available groups by running the yum grouplist hidden command. Out of the complete list of available package groups, the following table describes the virtualization package groups and what they provide. Table 2.1.
Virtualization Package Groups
Package Group | Description | Mandatory Packages | Optional Packages
Virtualization Hypervisor | Smallest possible virtualization host installation | libvirt, qemu-kvm, qemu-img | qemu-kvm-tools
Virtualization Client | Clients for installing and managing virtualization instances | gnome-boxes, virt-install, virt-manager, virt-viewer, qemu-img | virt-top, libguestfs-tools, libguestfs-tools-c
Virtualization Platform | Provides an interface for accessing and controlling virtual machines and containers | libvirt, libvirt-client, virt-who, qemu-img | fence-virtd-libvirt, fence-virtd-multicast, fence-virtd-serial, libvirt-cim, libvirt-java, libvirt-snmp, perl-Sys-Virt
Virtualization Tools | Tools for offline virtual image management | libguestfs, qemu-img | libguestfs-java, libguestfs-tools, libguestfs-tools-c
To install a package group, run the yum group install package_group command. For example, to install the Virtualization Tools package group with all the package types, run: For more information on installing package groups, see the How to install a group of packages with yum on Red Hat Enterprise Linux? Knowledgebase article.
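After installing the packages, you can verify that the KVM modules are loaded and that libvirt responds. A minimal verification sketch:
# lsmod | grep kvm
# systemctl start libvirtd
# virsh list --all
The lsmod output should show kvm together with kvm_intel or kvm_amd , depending on the CPU; an empty virsh list --all table simply means that no guests are defined yet.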
[ "yum install qemu-kvm libvirt", "yum install virt-install libvirt-python virt-manager virt-install libvirt-client", "yum group install \"Virtualization Tools\" --setopt=group_package_types=mandatory,default,optional" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-installing_the_virtualization_packages-installing_virtualization_packages_on_an_existing_red_hat_enterprise_linux_system
Chapter 6. Deleting OpenShift Serverless custom resource definitions
Chapter 6. Deleting OpenShift Serverless custom resource definitions After uninstalling OpenShift Serverless, the Operator and API custom resource definitions (CRDs) remain on the cluster. You can use the following procedure to remove the remaining CRDs. Important Removing the Operator and API CRDs also removes all resources that were defined by using them, including Knative services. 6.1. Removing OpenShift Serverless Operator and API CRDs Delete the Operator and API CRDs using the following procedure. Prerequisites You have installed the OpenShift CLI ( oc ). You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have uninstalled Knative Serving and removed the OpenShift Serverless Operator. Procedure To delete the remaining OpenShift Serverless CRDs, enter the following command: $ oc get crd -oname | grep 'knative.dev' | xargs oc delete
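To confirm that the cleanup succeeded, repeat the query without the delete step; a minimal sketch:
$ oc get crd -oname | grep 'knative.dev'
No output means that all Knative CRDs, and therefore all resources defined by them, have been removed from the cluster.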
[ "oc get crd -oname | grep 'knative.dev' | xargs oc delete" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/removing_openshift_serverless/deleting-serverless-crds
Chapter 6. Managing application deployments
Chapter 6. Managing application deployments JBoss EAP features a range of application deployment and configuration options to cater to both administrators and developers. For administrators, the management console and the management CLI offer ideal graphical and command-line interfaces to manage application deployment in a production environment. For developers, the range of application deployment testing options include a configurable file system deployment scanner , the HTTP API , an IDE such as Red Hat CodeReady Studio, and Maven . When deploying applications, you may want to enable validation for deployment descriptors by setting the org.jboss.metadata.parser.validate system property to true . This can be done in one of the following ways: While starting the server. By adding it to the server configuration with the following management CLI command. 6.1. Managing application deployment using the management CLI Deploying applications using the management CLI gives you the benefit of a single command-line interface with the ability to create and run deployment scripts. You can use this scripting ability to configure specific application deployment and management scenarios. You can manage the deployments for a single server when running as a standalone server, or an entire network of servers when running in a managed domain. 6.1.1. Managing application deployments in a standalone server 6.1.1.1. Deploying an application to a standalone server using the management CLI You can deploy an application to a standalone server using the management CLI by using the deployment deploy-file command. Prerequisites JBoss EAP is running. Procedure Deploy an application packaged as a Web Archive (war) from the management CLI. Syntax Example A successful deployment does not produce any output to the management CLI, but the server log displays deployment messages such as the following output: Similarly, you can use the following deployment commands: Use the deployment deploy-cli-archive command to deploy the content from a .cli archive file. A CLI deployment archive is a JAR file with the .cli extension. It contains application archives that should be deployed and the CLI script files, deploy.scr and undeploy.scr , containing commands and operations. One script file, deploy.scr , contains the commands and operations that deploy the application archives and set up the environment; the other script file, undeploy.scr , contains the commands to undeploy the application archives and clean up the environment. Use the deployment deploy-url command to deploy the content referenced by a URL. Note When specifying the runtime-name attribute by using the --runtime-name option, you must include the .war extension in the name or the web context will not be registered by JBoss EAP. 6.1.1.2. Undeploying an application from a standalone server using the management CLI You can undeploy an application from a standalone server using the management CLI by using the deployment undeploy command. Undeploying an application deletes the deployment content from the repository. If you want to retain the deployment content while making the application unavailable, you can disable the deployment instead. For more information, see Disabling an application in a standalone server using the management CLI . Prerequisites JBoss EAP is running. Procedure Undeploy an application by using the management CLI.
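Before the syntax details that follow, here is a minimal sketch of the two operations, run from the management CLI prompt and assuming an archive named test-application.war; the path is a placeholder:
deployment deploy-file /path/to/test-application.war
deployment undeploy test-application.war
The deploy-file command copies the archive from the local file system into the server's content repository and enables it; undeploy removes the content again by its deployment name.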
Syntax Example A successful undeployment does not produce any output to the management CLI, but the server log displays undeployment messages like the following output: Similarly, you can use the deployment undeploy-cli-archive command to undeploy the content from a .cli archive file. You can also undeploy all deployments using a wildcard ( * ). 6.1.1.3. Disabling an application in a standalone server using the management CLI You can disable a deployed application without removing the deployment content from the repository. Prerequisites JBoss EAP is running. Procedure You can disable a single application or all applications deployed to JBoss EAP by using the deployment disable command from the management CLI. Disable a single deployment: Syntax Example Disable all the deployments: 6.1.1.4. Enabling an application in a standalone server using the management CLI You can enable a disabled application. Prerequisites JBoss EAP is running. Procedure You can enable a single application or all applications deployed to JBoss EAP by using the deployment enable command from the management CLI. Enable a single deployment: Syntax Example Enable all the deployments: 6.1.1.5. Listing deployments in a standalone server using the management CLI You can list deployments in a standalone server and view deployment information such as the runtime name, status, and so on. Prerequisites JBoss EAP is running. Procedure Use the deployment info command to list deployment information. The output will show details about each deployment, such as the runtime name, status, and whether it is enabled. To display deployment information by name: You can also list all the deployments using the deployment list command. 6.1.2. Managing application deployments in a managed domain 6.1.2.1. Deploying an application to a managed domain using the management CLI You can deploy an application to a managed domain using the management CLI by using the deployment deploy-file command and specifying the server groups to which the application should be deployed. Prerequisites JBoss EAP is running as a managed domain. Procedure You can deploy an application packaged as a Web Archive (war) from the management CLI to specific server groups or all the server groups. Deploy the application to specific server groups: Syntax Example Deploy the application to all server groups: Syntax Example A successful deployment does not produce any output to the management CLI, but the server log displays deployment messages for each affected server. Similarly, you can use the following deployment commands: Use the deployment deploy-cli-archive command to deploy the content from a .cli archive file. A CLI deployment archive is a JAR file with the .cli extension. It contains application archives that should be deployed and the CLI script files, deploy.scr and undeploy.scr , containing commands and operations. One script file, deploy.scr , contains the commands and operations that deploy the application archives and set up the environment; the other script file, undeploy.scr , contains the commands to undeploy the application archives and clean up the environment. Use the deployment deploy-url command to deploy the content referenced by a URL. Note When specifying the runtime-name attribute by using the --runtime-name option, you must include the .war extension in the name or the web context will not be registered by JBoss EAP. 6.1.2.2.
Undeploying an application from a managed domain using the management CLI You can undeploy an application from JBoss EAP running as a managed domain using the management CLI by using the deployment undeploy command. Undeploying an application deletes the deployment content from the repository. If you want to retain the deployment content while making the application unavailable, you can disable the deployment instead. For more information, see Disabling an application in a managed domain using the management CLI . Prerequisites JBoss EAP is running as a managed domain. Procedure Undeploy an application from all server groups that have that deployment by using the management CLI. Syntax Example A successful undeployment does not produce any output to the management CLI, but the server log displays undeployment messages for each affected server like the following output: Similarly, you can use the deployment undeploy-cli-archive command to undeploy the content from a .cli archive file. You can also undeploy all deployments using a wildcard ( * ). 6.1.2.3. Disabling an application in a managed domain using the management CLI You can disable a deployed application from specific server groups and retain its content in the repository for other server groups with that deployment. Prerequisites JBoss EAP is running as a managed domain. Procedure You can disable a single application or all applications deployed to JBoss EAP by using the deployment disable command from the management CLI. Disable a single application: Syntax Example Disable all the deployments: Syntax Example 6.1.2.4. Enabling an application in a managed domain using the management CLI Enable a disabled deployed application. Prerequisites JBoss EAP is running as a managed domain. Procedure You can enable a single application or all applications deployed to JBoss EAP by using the deployment enable command from the management CLI. Enable a single deployment: Syntax Example Enable all the deployments: Example 6.1.2.5. Listing deployments in a managed domain using the management CLI You can list deployments and view deployment information such as the runtime name, status, and so on. Prerequisites JBoss EAP is running as a managed domain. Procedure Use the deployment info command to list deployment information. The output will list the deployment and its state in each server group. To display deployment information by server group: The output will list the deployments and their state for the specified server group. You can also list all deployments in the domain using the deployment list command. 6.2. Managing application deployment using the management console Deploying applications using the management console gives you the benefit of a graphical interface that is easy to use. You can see at a glance which applications are deployed to your server or server groups, and you can enable, disable or remove applications from the content repository as required. 6.2.1. Application deployment on a standalone server using the management console Deployments can be viewed and managed from the Deployments tab of the JBoss EAP management console. Deploy an Application Click the Add ( + ) button. You can choose to deploy an application by uploading a deployment , adding an unmanaged deployment , or creating an empty deployment . Deployments are enabled by default. Upload a deployment Upload an application that will be copied to the server's content repository and managed by JBoss EAP. Adding an unmanaged deployment Specify the location of a deployment.
This deployment will not be copied to the server's content repository and will not be managed by JBoss EAP. Creating an empty deployment Create an empty, exploded deployment. You can add files to the deployment after it has been created. Undeploy an Application Select the deployment and choose the Undeploy option. JBoss EAP removes the deployment from the content repository. Disable an Application Select the deployment and choose the Disable option to disable the application. This undeploys the deployment, but does not remove it from the content repository. Replace an Application Select the deployment and choose the Replace option. Select the new version of the deployment, which must have the same name as the original, and click Finish . This undeploys and removes the original version of the deployment, and then deploys the new version. 6.2.2. Managing application deployment in a managed domain using the management console From the Deployments tab of the JBoss EAP management console, deployments can be viewed and managed by: Content Repository All managed and unmanaged deployments are listed in the Content Repository section. Deployments can be added and deployed to server groups here. Server Groups Deployments that have been deployed to one or more server groups are listed in the Server Groups section. Deployments can be enabled and added directly to a server group here. 6.2.2.1. Adding an application to the content repository using the management console You can add an application to the content repository using the management console. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. Procedure Log in to the management console. By default the management console is available on http://localhost:9990 . From Content Repository , click the Add button. Choose to add an application by uploading a deployment or adding an unmanaged deployment . Follow the prompts to deploy the application. Note that a deployment must be deployed to a server group before it can be enabled. 6.2.2.2. Deploying an application to a server group using the management console You can deploy an application to a server group using the management console. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. You have added the application to the content repository. Procedure Log in to the management console. By default the management console is available on http://localhost:9990 . From Content Repository , select a deployment and click the Deploy button. Select one or more server groups to which this deployment should be deployed. Optionally, select the option to enable the deployment on the selected server groups. 6.2.2.3. Undeploying an application from a server group using the management console You can undeploy an application from a server group using the management console. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. Procedure From Server Groups , select the appropriate server group. Select the desired deployment and click the Undeploy button. Deployments can also be undeployed from multiple server groups at once by selecting the Undeploy button for the deployment in Content Repository . 6.2.2.4. Removing an application from a managed domain using the management console You can remove an application from a managed domain using the management console. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. Procedure If the deployment is still deployed to any server groups, be sure to undeploy the deployment. 
From Content Repository , select the deployment and click the Remove button. This removes the deployment from the content repository. 6.2.2.5. Disabling an application in a managed domain using the management console You can disable an application in a managed domain using the management console. Disabling an application only undeploys it from the server but does not remove it from the content repository. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. Procedure From Server Groups , select the appropriate server group. Select the desired deployment and click the Disable button. This undeploys the deployment, but does not remove it from the content repository. 6.2.2.6. Replacing an application in a managed domain using the management console You can replace an application deployment with its newer version in a managed domain using the management console. Prerequisites JBoss EAP is running. You have created a user in JBoss EAP. Procedure From Content Repository , select the deployment and click the Replace button. Select the new version of the deployment, which must have the same name as the original, and click Replace . This undeploys and removes the original version of the deployment, and then deploys the new version. 6.3. Application deployment using the deployment scanner The deployment scanner monitors the deployment directory for applications to deploy. By default, the deployment scanner scans the EAP_HOME /standalone/deployments/ directory every five seconds for changes. Marker files are used to indicate the status of a deployment and to trigger actions against deployments, such as undeploying or redeploying. While it is recommended to use the management console or management CLI for application deployment in a production environment, deploying using the deployment scanner is provided for the convenience of developers. This allows users to build and test applications in a manner suited for rapid development cycles. Additionally, the deployment scanner should not be used in conjunction with other deployment methods. The deployment scanner is only available when running JBoss EAP as a standalone server. 6.3.1. Application deployment management in a standalone server using the deployment scanner The deployment scanner can be configured to allow or disallow automatic deployment of XML, zipped, and exploded content. If automatic deployment is disabled, you must manually create marker files to trigger deployment actions. For more information about the available marker file types and their purposes, see the Deployment Scanner Marker Files section. By default, automatic deployment for XML and zipped content is enabled. For details on configuring automatic deployment for each content type, see Configure the Deployment Scanner . Warning Deploying using the deployment scanner is provided for the convenience of developers and is not recommended for use in a production environment. It should also not be used in conjunction with other deployment methods. Deploy an Application Copy the content to the deployment folder. If auto-deployment is enabled, this file will be picked up automatically, deployed, and a .deployed marker file will be created. If auto-deployment is not enabled, then you will need to manually add a .dodeploy marker file to trigger deployment. Undeploy an Application Trigger an undeployment by removing the .deployed marker file. If auto-deployment is enabled, you can also remove the test-application.war file, which will trigger the undeployment. 
Note that this does not apply for exploded deployments. Redeploy an Application Create a .dodeploy marker file to initiate redeployment. 6.3.2. Deployment scanner configuration The deployment scanner can be configured using the management console or the management CLI. You can configure the deployment scanner's behavior, such as the scan interval, deployment folder location, and auto-deployment of certain application file types. You can also disable the deployment scanner entirely. For details on all available deployment scanner attributes, see the Deployment Scanner Attributes section. Use the following management CLI commands to configure the default deployment scanner. Disable the Deployment Scanner This disables the default deployment scanner. Change the Scan Interval This updates the scan interval time from 5000 milliseconds (five seconds) to 10000 milliseconds (ten seconds). Change the Deployment Folder This changes the location of the deployment folder from the default location of EAP_HOME /standalone/deployments to /path/to /deployments . The path value will be treated as an absolute path unless the relative-to attribute is specified, in which case it will be relative to that path. Enable the Automatic Deployment of Exploded Content This enables the automatic deployment of exploded content, which is disabled by default. Disable the Automatic Deployment of Zipped Content This disables the automatic deployment of zipped content, which is enabled by default. Disable the Automatic Deployment of XML Content This disables the automatic deployment of XML content, which is enabled by default. 6.3.3. Custom deployment scanner A new deployment scanner can be added using the management CLI or by navigating to the Deployment Scanners subsystem from the Configuration tab in the management console. This will define a new directory to scan for deployments. The default deployment scanner monitors EAP_HOME /standalone/deployments . See Deployment scanner configuration for details on configuring an existing deployment scanner. The following management CLI command adds a new deployment scanner that will check EAP_HOME /standalone/new_deployment_dir every five seconds for deployments. Note The specified directory must already exist or this command will fail with an error. A new deployment scanner has been defined and the specified directory will be monitored for deployments. 6.4. Managing application deployment using Maven Deploying applications using Apache Maven allows you to easily incorporate deployment to JBoss EAP into your existing development workflow. You can use Maven to deploy applications to JBoss EAP using the WildFly Maven Plugin , which provides simple operations to deploy and undeploy applications to the application server. 6.4.1. Managing application deployment on a standalone server using Maven You can deploy and undeploy applications to JBoss EAP running as a standalone server by using the WildFly Maven Plugin. 6.4.1.1. Deploying an application to a standalone server using Maven The following instructions show how to deploy the JBoss EAP helloworld quickstart to a standalone server using Maven. See Using the Quickstart Examples in the JBoss EAP Getting Started Guide for more information on the JBoss EAP quickstarts. Procedure Initialize the WildFly Maven Plugin in your Maven pom.xml file. This should already be configured in the JBoss EAP quickstart pom.xml files.
<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>${version.wildfly.maven.plugin}</version> </plugin> From the helloworld quickstart directory, execute the following Maven command. After issuing the Maven command to deploy, the terminal window shows the following output indicating a successful deployment. Verification The deployment can also be confirmed by viewing the server log of the active server instance. 6.4.1.2. Undeploying an application from a standalone server using Maven The following instructions show how to undeploy the JBoss EAP helloworld quickstart from a standalone server using Maven. Prerequisites You have initialized the WildFly Maven Plugin in your Maven pom.xml file. Procedure From the helloworld quickstart directory, execute the following Maven command. After issuing the Maven command to undeploy, the terminal window shows the following output indicating a successful undeployment. Verification The undeployment can also be confirmed by viewing the server log of the active server instance. 6.4.2. Managing application deployment on a managed domain using Maven You can deploy and undeploy applications to JBoss EAP running as a managed domain by using the WildFly Maven Plugin. 6.4.2.1. Deploying an application to a managed domain using Maven The following instructions show how to deploy the JBoss EAP helloworld quickstart in a managed domain using Maven. See Using the Quickstart Examples in the JBoss EAP Getting Started Guide for more information on the JBoss EAP quickstarts. Procedure Specify the server groups to which the application should be deployed in the Maven pom.xml file. The following configuration in the pom.xml initializes the WildFly Maven Plugin and specifies main-server-group as the server group to which the application should be deployed. <plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>${version.wildfly.maven.plugin}</version> <configuration> <domain> <server-groups> <server-group>main-server-group</server-group> </server-groups> </domain> </configuration> </plugin> From the helloworld quickstart directory, execute the following Maven command. After issuing the Maven command to deploy, the terminal window shows the following output indicating a successful deployment. Verification The deployment can also be confirmed by viewing the server log of the active server instance. 6.4.2.2. Undeploying an application from a managed domain using Maven The following instructions show how to undeploy the JBoss EAP helloworld quickstart from a managed domain using Maven. Prerequisites You have initialized the WildFly Maven Plugin. Procedure From the helloworld quickstart directory, execute the following Maven command. After issuing the Maven command to undeploy, the terminal window shows the following output indicating a successful undeployment. Verification The undeployment can also be confirmed by viewing the server log of the active server instance. 6.5. Managing application deployment using the HTTP API Applications can be deployed to JBoss EAP using the HTTP API with the curl command. For more information on using the HTTP API, see the HTTP API section. 6.5.1. Application deployment management on a standalone server using the HTTP API By default, the HTTP API is accessible at http:// HOST : PORT /management , for example, http://localhost:9990/management .
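A minimal sketch of the standalone deploy and undeploy calls, assuming a management user admin with password password and an archive test-application.war in the current directory:
$ curl --digest -u admin:password -F "file=@test-application.war" http://localhost:9990/management/add-content
$ curl --digest -u admin:password -H "Content-Type: application/json" -d '{"operation":"add","address":[{"deployment":"test-application.war"}],"content":[{"hash":{"BYTES_VALUE":"<hash from add-content>"}}],"enabled":"true"}' http://localhost:9990/management
$ curl --digest -u admin:password -H "Content-Type: application/json" -d '{"operation":"remove","address":[{"deployment":"test-application.war"}]}' http://localhost:9990/management
The first call uploads the content and returns a hash; the second registers and enables the deployment by referencing that hash; the remove operation undeploys the application and deletes it from the repository.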
Deploy an Application Undeploy an Application See this Red Hat Knowledgebase article to learn more about programmatically generating the JSON requests. 6.5.2. Application deployment management on a managed domain using the HTTP API You can deploy and undeploy applications on a managed domain using the HTTP API. 6.5.2.1. Deploying an application in a managed domain using the HTTP API By default, the HTTP API is accessible at http:// HOST : PORT /management , for example, http://localhost:9990/management . Procedure Add the deployment to the content repository. Add the deployment to the desired server group. Deploy the application to the server group. 6.5.2.2. Undeploying an application in a managed domain using the HTTP API By default, the HTTP API is accessible at http:// HOST : PORT /management , for example, http://localhost:9990/management . Procedure Remove the deployment from all server groups to which it is assigned. Remove the deployment from the content repository. 6.6. Customizing deployment behavior 6.6.1. Custom directory for deployment content You can define a custom location for JBoss EAP to store deployed content. Define a Custom Directory for a Standalone Server By default, deployed content for a standalone server is stored in the EAP_HOME /standalone/data/content directory. This location can be changed by passing in the -Djboss.server.deploy.dir argument when starting the server. The chosen location should be unique among JBoss EAP instances. Note The jboss.server.deploy.dir property specifies the directory to be used for storing content that has been deployed using the management console or management CLI. To define a custom deployment directory to be monitored by the deployment scanner, see Deployment scanner configuration . Define a Custom Directory for a Managed Domain By default, deployed content for a managed domain is stored in the EAP_HOME /domain/data/content directory. This location can be changed by passing in the -Djboss.domain.deployment.dir argument when starting the domain. The chosen location should be unique among JBoss EAP instances. 6.6.2. Control the order of deployments JBoss EAP offers fine-grained control over the order of deployments when the server is started. Strict order of the deployment of applications present in multiple EAR files can be specified along with persistence of the order after a restart. You can use the jboss-all.xml deployment descriptor to declare dependencies between top-level deployments. For example, if you have an app.ear that depends on framework.ear being deployed first, then you can create an app.ear/META-INF/jboss-all.xml file as shown below. <jboss xmlns="urn:jboss:1.0"> <jboss-deployment-dependencies xmlns="urn:jboss:deployment-dependencies:1.0"> <dependency name="framework.ear" /> </jboss-deployment-dependencies> </jboss> Note You can use the deployment's runtime name as the dependency name in the jboss-all.xml file. This ensures that framework.ear is deployed before app.ear . Important If you create a jboss-all.xml file in app.ear and you do not deploy framework.ear , the server attempts to deploy app.ear and fails. 6.6.3. Overriding deployment content 6.6.3.1. About deployment overlay A deployment overlay can be used to overlay content into an existing deployment without physically modifying the contents of the deployment archive. It allows you to override deployment descriptors, library JAR files, classes, Jakarta Server Pages pages, and other files at runtime without rebuilding the archive. 
This can be useful if you need to adapt a deployment for different environments that need different configurations or settings. For example, when moving a deployment through the application lifecycle from development, to testing, to stage, and finally into production, you might want to swap deployment descriptors, modify static web resources to change the branding of the application, or even replace JAR libraries with different versions depending on the target environment. It is also a useful feature for installations that need to change a configuration but can not modify or crack an archive due to policy or security restrictions. When defining a deployment overlay, you specify the file on a file system that will replace the file in the deployment archive. You must also specify which deployments should be affected by the deployment overlay. Any affected deployments must be redeployed in order for the changes to take effect. Parameters You can use any of the following parameters to configure your deployment overlay: name : The name of the deployment overlay. content : A comma-separated list that maps the file on the file system to the file in the archive that it replaces. The format for each entry is ARCHIVE_PATH = FILESYSTEM_PATH . deployments : Comma-separated list of deployments to which this overlay is linked. redeploy-affected : Redeploys all affected deployments. For full usage details, execute deployment-overlay --help . 6.6.3.2. Defining a deployment overlay You can define a deployment overlay to overlay content into an existing deployment without physically modifying the contents of the deployment archive. Procedure Use the deployment-overlay add management CLI command to add a deployment overlay: Note In a managed domain, specify the applicable server groups by using --server-groups or specify all server groups with --all-server-groups . After you created a deployment overlay, you can add content to an existing overlay, link the overlay to a deployment, or remove the overlay. Optional: You can specify an overlay configuration to link to an external directory that contains static web resources, such as HTML, images, or videos, using the <overlay> element. The <overlay> element specifies static files that overlay the static files of a web application, similar to the procedure of JAR overlays. This element is located in the application file jboss-web.xml . With this element configuration, you do not need to repackage the application. The following example shows system property substitution in the <overlay> element, where {example.path.to.overlay} defines the /PATH/TO/STATIC/WEB/CONTENT location. Example: <overlay> element in a jboss-web.xml file <jboss-web xmlns="http://www.jboss.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd" version="10.0"> <overlay>{example.path.to.overlay}</overlay> </jboss-web> You can specify a system property in the <overlay> element if jboss-descriptor-property-replacement is set to true , which is the default value for the descriptor property. 
To configure jboss-descriptor-property-replacement , use the following management CLI command: This command adds the following XML content to the ee subsystem in the JBoss EAP configuration: <subsystem xmlns="urn:jboss:domain:ee:4.0"> <jboss-descriptor-property-replacement>true</jboss-descriptor-property-replacement> </subsystem> Note The <overlay> element does not override deployment files that already exist in your EAP project. If multiple <overlay> elements include the same file, the precedence order is determined based on the sequence of the overlay elements in the application jboss-web.xml file. 6.6.4. Using rollout plans In a managed domain, operations targeted at domain or host level resources can potentially impact multiple servers. Such operations can include a rollout plan detailing the sequence in which the operation would be applied to the servers, as well as the policies detailing whether the operation could be reverted if it fails to execute successfully on some servers. If no rollout plan is specified, the default rollout plan is used. 6.6.4.1. Example rollout plan Below is an example rollout plan involving five server groups. Operations can be applied to server groups serially, in-series , or concurrently, concurrent-groups . For more information, see Rollout plan syntax . {"rollout-plan" => {"in-series" => [{"concurrent-groups" => {"group-A" => {"rolling-to-servers" => "true", "max-failure-percentage" => "20"}, "group-B" => undefined}}, {"server-group" => {"group-C" => {"max-failed-servers" => "1"}}}, {"concurrent-groups" => {"group-D" => {"rolling-to-servers" => "true", "max-failure-percentage" => "20"}, "group-E" => undefined}}], "rollback-across-groups" => true}} Looking at the example above, applying the operation to the servers in the domain is done in three phases. If the policy for any server group triggers a rollback of the operation across the server group, all other server groups will be rolled back as well. Server groups group-A and group-B will have the operation applied concurrently. The operation will be applied to the servers in group-A in series, while all servers in group-B will handle the operation concurrently. If more than 20% of the servers in group-A fail to apply the operation, it will be rolled back across that group. If any servers in group-B fail to apply the operation it will be rolled back across that group. Once all servers in group-A and group-B are complete, the operation will be applied to the servers in group-C . Those servers will handle the operation concurrently. If more than one server in group-C fails to apply the operation it will be rolled back across that group. Once all servers in group-C are complete, server groups group-D and group-E will have the operation applied concurrently. The operation will be applied to the servers in group-D in series, while all servers in group-E will handle the operation concurrently. If more than 20% of the servers in group-D fail to apply the operation, it will be rolled back across that group. If any servers in group-E fail to apply the operation it will be rolled back across that group. 6.6.4.2. Rollout plan syntax You can specify a rollout plan in either of the following ways. By defining the rollout plan in the deploy command operation headers. See Deploy an application using a rollout plan for details. By storing the rollout plan using the rollout-plan command and then referencing the plan name in the deploy command operation headers. See Deploying an application using a stored rollout plan for details. Although each method has a different initial command, both methods use the rollout operation header to define the rollout plan. This uses the following syntax. PLAN_NAME is the name for the rollout plan that was stored using the rollout-plan command. SERVER_GROUP_LIST is the list of server groups.
Use a comma ( , ) to separate multiple server groups to indicate that operations should be performed on each server group sequentially. Use a caret ( ^ ) separator to indicate that operations should be performed on each server group concurrently. For each server group, set any of the following policies in parentheses. Use a comma to separate multiple policies. rolling-to-servers : A boolean that, if set to true , applies the operation to each server in the group in series. If the value is false or not specified, the operation will be applied to the servers in the group concurrently. max-failed-servers : An integer that specifies the maximum number of servers in the group that can fail to apply the operation before it should be reverted on all servers in the group. The default value if not specified is 0 , meaning that a failure on any server will trigger rollback across the group. max-failure-percentage : An integer between 0 and 100 that represents the maximum percentage of the total number of servers in the group that can fail to apply the operation before it should be reverted on all servers in the group. The default value if not specified is 0 , meaning that a failure on any server will trigger rollback across the group. Note If both max-failed-servers and max-failure-percentage are set to non-zero values, max-failure-percentage takes precedence. rollback-across-groups : A boolean that indicates whether the need to roll back the operation on all the servers in one server group triggers a rollback across all the server groups. This defaults to false . 6.6.4.3. Deploy an application using a rollout plan You can provide the full details of a rollout plan directly to the deploy command by passing the rollout settings into the headers argument. See the Rollout Plan Syntax for more information on the format. The following management CLI command deploys an application to the main-server-group server group using a deployment plan that specifies rolling-to-servers=true for serial deployment. 6.6.4.4. Deploying an application using a stored rollout plan Since rollout plans can be complex, you have the option to store the details of a rollout plan. This allows you to reference the rollout plan name when you want to use it instead of requiring the full details of the rollout plan each time. Procedure Use the rollout-plan management CLI command to store a rollout plan. See the Rollout plan syntax for more information on the format. This creates the following deployment plan. Specify the stored rollout plan name when deploying the application. The following management CLI command deploys an application to all server groups using the my-rollout-plan stored rollout plan. 6.6.4.5. Stored rollout plan removal You can remove a stored rollout plan using the rollout-plan management CLI command by specifying the name of the rollout plan to remove. 6.6.4.6. Default rollout plan All operations that impact multiple servers will be executed with a rollout plan. If no rollout plan is specified in the operation request, a default rollout plan will be generated. The plan will have the following characteristics. There will only be a single high-level phase. All server groups affected by the operation will have the operation applied concurrently. Within each server group, the operation will be applied to all servers concurrently. Failure on any server in a server group will cause rollback across the group. Failure of any server group will result in rollback of all other server groups. 6.7.
Manage exploded deployments You can manage exploded deployments using the management interfaces. This allows you to change the contents of an exploded application without deploying a new version of the application. Note Updates to static files in a deployment, such as JavaScript and CSS files, take effect immediately. Changes to other files, such as Java classes, might require an application redeployment for the changes to take effect. You can either start with an empty deployment or explode an existing archive deployment and then add or remove content . See Viewing Deployment Content to browse the files in a deployment or read the contents of the files. Create an Empty Exploded Deployment You can create an empty exploded deployment to which you can later add content as necessary. Use the following management CLI command to create an empty exploded deployment. The empty=true option is required to confirm that you intended to create an empty deployment. Explode an Existing Archive Deployment You can explode an existing archive deployment to be able to update its contents. Note that a deployment must be disabled before it can be exploded. Use the following management CLI command to explode a deployment. You can now add or remove content from this deployment. Note You can also explode an existing archive deployment from the management console. From the Deployments tab, select the deployment and select the Explode drop down option. Add Content to an Exploded Deployment To add content to a deployment, use the add-content management CLI operation. Provide the path to the location in the deployment where the content should be added, and provide the content to be uploaded. The content to upload can be provided as a local file stream, URL, hash of content that already exists in the JBoss EAP content repository, or a byte array of the content. The following management CLI command uses the input-stream-index option to upload the contents of a local file to the deployment. Note When adding content to a deployment using the add-content operation, content in the deployment is overwritten by default. You can change this behavior by setting the overwrite option to false . Remove Content from an Exploded Deployment To remove content from a deployment, use the remove-content management CLI operation and provide the path of the content in the deployment to remove. 6.8. Viewing deployment content You can browse information about files in a managed deployment and read the contents of the files using the JBoss EAP management interfaces. 6.8.1. Browse files in a deployment Use the browse-content operation to view the files and directories in a managed deployment. Provide no arguments to return the entire deployment structure or use the path argument to provide the path to a specific directory. Note You can also browse contents of a deployment from the management console by navigating to the Deployments tab, selecting the deployment, and selecting View from the drop down. This displays the files and directories in the META-INF/ directory of the helloworld.war deployment. You can also specify the following arguments to the browse-content operation. archive Whether to only return archive files. depth Specify the depth of files to return. 6.8.2. Read deployment content You can read the contents of a file in a managed deployment using the read-content operation. Provide no arguments to return the entire deployment or use the path argument to provide the path to a specific file. 
For example: This returns a file stream, which can be displayed in the management CLI or saved to the file system . 6.8.2.1. Display the Contents of a File Use the attachment display command to read the contents of the MANIFEST.MF file. This displays the contents of the MANIFEST.MF file from the helloworld.war deployment to the management CLI. 6.8.2.2. Save the Contents of a File Use the attachment save command to save the contents of the MANIFEST.MF file to the file system. This saves the MANIFEST.MF file from the helloworld.war deployment to the file system at path/to /MANIFEST.MF . If you do not specify a file path using the --file argument, the file will be named using its unique attachment ID and saved in the working directory of the management CLI, which by default is EAP_HOME /bin/ .
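Tying these operations together, a minimal management CLI sketch for updating one file inside an exploded deployment and reading it back might look like this (the deployment name helloworld.war and the local file path are assumptions):

deployment disable helloworld.war
/deployment=helloworld.war:explode
/deployment=helloworld.war:add-content(content=[{target-path=index.html, input-stream-index=/tmp/index.html}])
deployment enable helloworld.war
attachment display --operation=/deployment=helloworld.war:read-content(path=index.html)

Because add-content overwrites existing content by default, no overwrite option is needed in this sketch.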
[ "EAP_HOME /bin/standalone.sh -Dorg.jboss.metadata.parser.validate=true", "/system-property=org.jboss.metadata.parser.validate:add(value=true)", "deployment deploy-file <path_to_the_application> / <application_name> .war", "deployment deploy-file /my-applications/test-application.war", "WFLYSRV0027: Starting deployment of \"test-application.war\" (runtime-name: \"test-application.war\") WFLYUT0021: Registered web context: /test-application WFLYSRV0010: Deployed \"test-application.war\" (runtime-name : \"test-application.war\")", "deployment undeploy <deployment>", "deployment undeploy test-application.war", "WFLYUT0022: Unregistered web context: /test-application WFLYSRV0028: Stopped deployment test-application.war (runtime-name: test-application.war) in 62ms WFLYSRV0009: Undeployed \"test-application.war\" (runtime-name: \"test-application.war\")", "deployment undeploy *", "deployment disable <deployment>", "deployment disable test-application.war", "deployment disable-all", "deployment enable <deployment>", "deployment enable test-application.war", "deployment enable-all", "deployment info", "NAME RUNTIME-NAME PERSISTENT ENABLED STATUS helloworld.war helloworld.war true true OK test-application.war test-application.war true true OK", "deployment info helloworld.war", "deployment deploy-file <path_to_the_application> / <application_name> .war --server-groups= <server-group_1> ,..., <server-group_1>", "deployment deploy-file /my-applications/test-application.war --server-groups=main-server-group,other-server-group", "deployment deploy-file <path_to_the_application> / <application_name> .war --all-server-groups", "deployment deploy-file /my-applications/test-application.war --all-server-groups", "[Server:server-one] WFLYSRV0027: Starting deployment of \"test-application.war\" (runtime-name: \"test-application.war\") [Server:server-one] WFLYUT0021: Registered web context: /test-application [Server:server-one] WFLYSRV0010: Deployed \"test-application.war\" (runtime-name : \"test-application.war\")", "deployment undeploy <application_name> .war --all-relevant-server-groups", "deployment undeploy test-application.war --all-relevant-server-groups", "[Server:server-one] WFLYUT0022: Unregistered web context: /test-application [Server:server-one] WFLYSRV0028: Stopped deployment test-application.war (runtime-name: test-application.war) in 74ms [Server:server-one] WFLYSRV0009: Undeployed \"test-application.war\" (runtime-name: \"test-application.war\")", "deployment undeploy * --all-relevant-server-groups", "deployment disable <application_name> .war --server-groups= <server-group_1> ,..., <server-group_1>", "deployment disable test-application.war --server-groups=other-server-group", "deployment disable-all --server-groups= <server-group_1> ,..., <server-group_1>", "deployment disable-all --server-groups=other-server-group", "deployment enable <deployment> --server-groups= <server-group_1> ,..., <server-group_1>", "deployment enable test-application.war --server-groups=other-server-group", "deployment enable-all --server-groups= <server-group_1> ,..., <server-group_1>", "deployment enable-all --server-groups=other-server-group", "deployment info helloworld.war", "NAME RUNTIME-NAME helloworld.war helloworld.war SERVER-GROUP STATE main-server-group enabled other-server-group added", "deployment info --server-group=other-server-group", "NAME RUNTIME-NAME STATE helloworld.war helloworld.war added test-application.war test-application.war enabled", "cp /path/to /test-application.war EAP_HOME 
/standalone/deployments/", "touch EAP_HOME /standalone/deployments/test-application.war.dodeploy", "rm EAP_HOME /standalone/deployments/test-application.war.deployed", "touch EAP_HOME /standalone/deployments/test-application.war.dodeploy", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-enabled,value=false)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=scan-interval,value=10000)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=path,value= /path/to /deployments)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-exploded,value=true)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-zipped,value=false)", "/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-xml,value=false)", "/subsystem=deployment-scanner/scanner=new-scanner:add(path=new_deployment_dir,relative-to=jboss.server.base.dir,scan-interval=5000)", "<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.wildfly.maven.plugin}</version> </plugin>", "mvn clean install wildfly:deploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.981 s [INFO] Finished at: 2015-12-23T15:06:13-05:00 [INFO] Final Memory: 21M/231M [INFO] ------------------------------------------------------------------------", "WFLYSRV0027: Starting deployment of \"helloworld.war\" (runtime-name: \"helloworld.war\") WFLYUT0021: Registered web context: /helloworld WFLYSRV0010: Deployed \"helloworld.war\" (runtime-name : \"helloworld.war\")", "mvn wildfly:undeploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.237 s [INFO] Finished at: 2015-12-23T15:09:10-05:00 [INFO] Final Memory: 10M/183M [INFO] ------------------------------------------------------------------------", "WFLYUT0022: Unregistered web context: /helloworld WFLYSRV0028: Stopped deployment helloworld.war (runtime-name: helloworld.war) in 27ms WFLYSRV0009: Undeployed \"helloworld.war\" (runtime-name: \"helloworld.war\")", "<plugin> <groupId>org.wildfly.plugins</groupId> <artifactId>wildfly-maven-plugin</artifactId> <version>USD{version.wildfly.maven.plugin}</version> <configuration> <domain> <server-groups> <server-group>main-server-group</server-group> </server-groups> </domain> </configuration> </plugin>", "mvn clean install wildfly:deploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 4.005 s [INFO] Finished at: 2016-09-02T14:36:17-04:00 [INFO] Final Memory: 21M/226M [INFO] ------------------------------------------------------------------------", "WFLYSRV0027: Starting deployment of \"helloworld.war\" (runtime-name: \"helloworld.war\") WFLYUT0021: Registered web context: /helloworld WFLYSRV0010: Deployed \"helloworld.war\" (runtime-name : \"helloworld.war\")", "mvn wildfly:undeploy", "[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.750 s [INFO] Finished at: 
2016-09-02T14:45:10-04:00 [INFO] Final Memory: 10M/184M [INFO] ------------------------------------------------------------------------", "WFLYUT0022: Unregistered web context: /helloworld WFLYSRV0028: Stopped deployment helloworld.war (runtime-name: helloworld.war) in 106ms WFLYSRV0009: Undeployed \"helloworld.war\" (runtime-name: \"helloworld.war\")", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"composite\", \"address\" : [], \"steps\" : [{\"operation\" : \"add\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"content\" : [{\"url\" : \"file:/path/to/test-application.war\"}]},{\"operation\" : \"deploy\", \"address\" : {\"deployment\" : \"test-application.war\"}}],\"json.pretty\":1}'", "curl --digest -L -D - http:// HOST : PORT /management --header \"Content-Type: application/json\" -u USER : PASSWORD -d '{\"operation\" : \"composite\", \"address\" : [], \"steps\" : [{\"operation\" : \"undeploy\", \"address\" : {\"deployment\" : \"test-application.war\"}},{\"operation\" : \"remove\", \"address\" : {\"deployment\" : \"test-application.war\"}}],\"json.pretty\":1}'", "curl --digest -L -D - http:// <HOST> : <PORT> /management --header \"Content-Type: application/json\" -u <USER> : <PASSWORD> -d '{\"operation\" : \"add\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"content\" : [{\"url\" : \"file: </path/to> /test-application.war\"}],\"json.pretty\":1}'", "curl --digest -L -D - http:// <HOST> : <PORT> /management --header \"Content-Type: application/json\" -u <USER> : <PASSWORD> -d '{\"operation\" : \"add\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// <HOST> : <PORT> /management --header \"Content-Type: application/json\" -u <USER> : <PASSWORD> -d '{\"operation\" : \"deploy\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// <HOST> : <PORT> /management --header \"Content-Type: application/json\" -u <USER> : <PASSWORD> -d '{\"operation\" : \"remove\", \"address\" : {\"server-group\" : \"main-server-group\",\"deployment\":\"test-application.war\"},\"json.pretty\":1}'", "curl --digest -L -D - http:// <HOST> : <PORT> /management --header \"Content-Type: application/json\" -u <USER> : <PASSWORD> -d '{\"operation\" : \"remove\", \"address\" : {\"deployment\" : \"test-application.war\"}, \"json.pretty\":1}'", "EAP_HOME /bin/standalone.sh -Djboss.server.deploy.dir= /path/to /new_deployed_content", "EAP_HOME /bin/domain.sh -Djboss.domain.deployment.dir= /path/to /new_deployed_content", "<jboss xmlns=\"urn:jboss:1.0\"> <jboss-deployment-dependencies xmlns=\"urn:jboss:deployment-dependencies:1.0\"> <dependency name=\"framework.ear\" /> </jboss-deployment-dependencies> </jboss>", "deployment-overlay add --name=new-deployment-overlay --content=WEB-INF/web.xml= /path/to /other/web.xml --deployments=test-application.war --redeploy-affected", "<jboss-web xmlns=\"http://www.jboss.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-web_10_0.xsd\" version=\"10.0\"> <overlay>{example.path.to.overlay}</overlay> </jboss-web>", "/subsystem=ee:write-attribute(name=jboss-descriptor-property-replacement,value=true)", "<subsystem xmlns=\"urn:jboss:domain:ee:4.0\"> 
<jboss-descriptor-property-replacement>true</jboss-descriptor-property-replacement> </subsystem>", "{\"my-rollout-plan\" => {\"rollout-plan\" => { \"in-series\" => [ {\"concurrent-groups\" => { \"group-A\" => { \"max-failure-percentage\" => \"20\", \"rolling-to-servers\" => \"true\" }, \"group-B\" => undefined }}, {\"server-group\" => {\"group-C\" => { \"rolling-to-servers\" => \"false\", \"max-failed-servers\" => \"1\" }}}, {\"concurrent-groups\" => { \"group-D\" => { \"max-failure-percentage\" => \"20\", \"rolling-to-servers\" => \"true\" }, \"group-E\" => undefined }} ], \"rollback-across-groups\" => \"true\" }}}", "rollout (id= PLAN_NAME | SERVER_GROUP_LIST ) [rollback-across-groups]", "deploy /path/to /test-application.war --server-groups=main-server-group --headers={rollout main-server-group(rolling-to-servers=true)}", "rollout-plan add --name=my-rollout-plan --content={rollout main-server-group(rolling-to-servers=false,max-failed-servers=1),other-server-group(rolling-to-servers=true,max-failure-percentage=20) rollback-across-groups=true}", "\"rollout-plan\" => { \"in-series\" => [ {\"server-group\" => {\"main-server-group\" => { \"rolling-to-servers\" => false, \"max-failed-servers\" => 1 }}}, {\"server-group\" => {\"other-server-group\" => { \"rolling-to-servers\" => true, \"max-failure-percentage\" => 20 }}} ], \"rollback-across-groups\" => true }", "deploy /path/to /test-application.war --all-server-groups --headers={rollout id=my-rollout-plan}", "rollout-plan remove --name=my-rollout-plan", "/deployment= DEPLOYMENT_NAME .war:add(content=[{empty=true}])", "/deployment= ARCHIVE_DEPLOYMENT_NAME .ear:explode", "/deployment= DEPLOYMENT_NAME .war:add-content(content=[{target-path= /path/to/FILE_IN_DEPLOYMENT , input-stream-index= /path/to/LOCAL_FILE_TO_UPLOAD }]", "/deployment= DEPLOYMENT_NAME .war:remove-content(paths=[ /path/to/FILE_1 , /path/to/FILE_2 ])", "/deployment=helloworld.war:browse-content(path=META-INF/)", "{ \"outcome\" => \"success\", \"result\" => [ { \"path\" => \"MANIFEST.MF\", \"directory\" => false, \"file-size\" => 827L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/pom.properties\", \"directory\" => false, \"file-size\" => 106L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/pom.xml\", \"directory\" => false, \"file-size\" => 2713L }, { \"path\" => \"maven/org.jboss.eap.quickstarts/helloworld/\", \"directory\" => true }, { \"path\" => \"maven/org.jboss.eap.quickstarts/\", \"directory\" => true }, { \"path\" => \"maven/\", \"directory\" => true }, { \"path\" => \"INDEX.LIST\", \"directory\" => false, \"file-size\" => 251L } ] }", "/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF)", "{ \"outcome\" => \"success\", \"result\" => {\"uuid\" => \"24ba8e06-21bd-4505-b4d4-bdfb16451b95\"}, \"response-headers\" => {\"attached-streams\" => [{ \"uuid\" => \"24ba8e06-21bd-4505-b4d4-bdfb16451b95\", \"mime-type\" => \"text/plain\" }]} }", "attachment display --operation=/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF)", "ATTACHMENT 8af87836-2abd-423a-8e44-e731cc57bd80: Manifest-Version: 1.0 Implementation-Title: Quickstart: helloworld Implementation-Version: 7.4.0.GA Java-Version: 1.8.0_131 Built-By: username Scm-Connection: scm:git:[email protected]:jboss/jboss-parent-pom.git/quic kstart-parent/helloworld Specification-Vendor: JBoss by Red Hat", "attachment save --operation=/deployment=helloworld.war:read-content(path=META-INF/MANIFEST.MF) --file= /path/to /MANIFEST.MF" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuration_guide/managing-application-deployments_default
Part VI. Uninstall Red Hat JBoss Data Grid
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/part-uninstall_red_hat_jboss_data_grid
function::symdata
function::symdata Name function::symdata - Return the kernel symbol and module offset for the address Synopsis Arguments addr The address to translate Description Returns the (function) symbol name associated with the given address, if known, together with the offset from the start of the symbol, the size of the symbol, and the module name (between brackets). If the symbol is unknown but the module is known, the offset inside the module and the size of the module are given instead. Any element that is not known is omitted, and if the symbol name is unknown, the hex string for the given address is returned.
[ "symdata:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-symdata
3.4. Exclusive Activation of a Volume Group in a Cluster
3.4. Exclusive Activation of a Volume Group in a Cluster The following procedure configures the LVM volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata. This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd . Perform the following procedure on each node in the cluster. Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows: Note If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [] . Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete. Reboot the node. Note If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then enter the following command. Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on all of the nodes in the cluster with the following command.
[ "lvmconf --enable-halvm --services --startstopservices", "vgs --noheadings -o vg_name my_vg rhel_home rhel_root", "volume_list = [ \"rhel_root\", \"rhel_home\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)", "pcs cluster start", "pcs cluster start --all" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-exclusiveactivenfs-haaa
Chapter 6. Working with nodes
Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.26.0 node1.example.com Ready worker 7h v1.26.0 node2.example.com Ready worker 7h v1.26.0 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.26.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.26.0 node2.example.com Ready worker 7h v1.26.0 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.26.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.26.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.26.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.26.0 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has 
sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.26.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.26.0 Kube-Proxy Version: v1.26.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. 
These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod selector. Pod selectors are based on labels, so all the pods with the specified label are evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.26.0 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true To set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod is used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets by setting the --ignore-daemonsets flag to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true A combined sketch of these options follows this procedure. Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1>
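As that combined sketch, several of the drain options above can be passed in a single invocation (the node names and the app=frontend label are placeholders):

oc adm drain <node1> <node2> --pod-selector=app=frontend --ignore-daemonsets=true --delete-emptydir-data=true --grace-period=60 --timeout=300s

Pods carrying the app=frontend label are evacuated, daemon set pods are skipped rather than blocking the drain, local emptyDir data is discarded, each pod gets 60 seconds to terminate gracefully, and the command gives up after five minutes.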
6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted, even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to add the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command adds or updates a label on all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters, and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur when an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters. Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1.
Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. 
Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. 
apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.3. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.4. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. 
A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0 You can see that scheduling on each worker node is disabled as the change is being applied. 
Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.5. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can enable swap memory use for OpenShift Container Platform workloads on a per-node basis. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.6. Migrating control plane nodes from one RHOSP host to another You can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. 6.4. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit, or both. If you use both options, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node.
Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there is a large number of I/O-intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.4.1. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, podsPerCore is set to 10 and maxPods is set to 250 . This means that unless the node has 25 cores or more, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied.
The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the TuneD daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. Note Currently, disabling CPU load balancing is not supported by cgroup v2. As a result, you might not get the desired behavior from performance profiles if you have cgroup v2 enabled. Enabling cgroup v2 is not recommended if you are using performance profiles. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: USD oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for OpenShift Container Platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities.
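For example, you can list the Tuned CRs that currently exist in the Operator's namespace; this quick check should always show at least the default CR: USD oc get tuned -n openshift-cluster-node-tuning-operator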
Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. 
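Putting these fields together, a minimal recommend: entry might look like the following sketch, where tuned_profile_1 refers to a profile defined in the profile: section above: recommend: - match: - label: node-role.kubernetes.io/worker priority: 20 profile: tuned_profile_1 This entry applies tuned_profile_1 to any node that carries the worker role label.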
If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. 
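After the Operator reconciles your CRs, you can check which TuneD profile was selected for each node by listing the Profile objects the Operator maintains; for example, assuming the default Operator namespace: USD oc get profile -n openshift-cluster-node-tuning-operator The output lists each node together with the TuneD profile currently applied to it.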
Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on an OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load the provider-<cloud-provider> profile if such a profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.5.4.
Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.6. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.7. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.7.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.7.2. 
Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.7.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.7.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. 
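If you want to back up etcd before rebooting a control plane node, one approach is to run the documented backup script from a debug shell; the following one-liner is a sketch that assumes the standard script location on RHCOS control plane nodes and a writable backup directory: USD oc debug node/<control_plane_node> -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup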
Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.8. Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.8.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.2. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. 
DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node condition would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 6.8.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.3. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from previous runs. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.8.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods.
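Before tuning the image thresholds, it can help to check the current image filesystem usage on a node. One way is a spot check with crictl from a debug shell; this is a sketch, not part of the official procedure: USD oc debug node/<node_name> -- chroot /host crictl imagefsinfo The output reports the used bytes and inodes on the image filesystem, which you can compare against the imageGCHighThresholdPercent and imageGCLowThresholdPercent values.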
Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command.
The Machine Config Pool you specified in the custom resource appears with UPDATING as `true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.9. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.9.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.9.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula: [Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds] Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.9.1.2. How nodes enforce resource constraints The node is able to limit the total amount of resources that pods can consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons.
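Before deciding how much to reserve for system daemons, you can sample what the node components actually consume through the summary API mentioned above. For example, the following spot check queries the API directly; this assumes your user is allowed to access the node proxy subresource: USD oc get --raw /api/v1/nodes/<node_name>/proxy/stats/summary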
Administrators should treat system daemons similar to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer. 6.9.1.3. Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.9.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.9.2. Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . 
To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without needing to manually calculate and update the values. This feature is disabled by default. Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to the CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.9.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. The ephemeral-storage resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi .
As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.10. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane, allowing the compute nodes to use CPUs 4 - 23. 6.10.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP. Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.11.
Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.11.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.4. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.11.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: config.openshift.io/v1 kind: KubeletConfig ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" #... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. 
Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.12. Machine Config Daemon metrics The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 6.12.1. Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Some entries contain commands for getting specific logs. However, the most comprehensive set of logs is available using the oc adm must-gather command. Note Metrics marked with * in the Name and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Table 6.5. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. 
For further investigation, run this command to see the logs from the machine-config-daemon container:

$ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon

mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running:

$ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon

mcd_kubelet_state* Logs kubelet health failures. * This is expected to be empty, with a failure count of 0. If the failure count exceeds 2, an error indicates that the threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs:

$ oc debug node/<node> -- chroot /host journalctl -u kubelet

mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running:

$ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon

mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX. If the update fails, an error is present. For further investigation, see the logs by running:

$ oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon

Additional resources

Monitoring overview
Gathering data about your cluster

6.13. Creating infrastructure nodes

Important
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:

$ oc get infrastructure cluster -o jsonpath='{.status.platform}'

You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment.

In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability.

Note
After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled.
You must either delete the misscheduled DNS pods or add a toleration to them.

6.13.1. OpenShift Container Platform infrastructure components

Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing.

To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components:

Kubernetes and OpenShift Container Platform control plane services
The default router
The integrated container image registry
The HAProxy-based Ingress Controller
The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects
Cluster aggregated logging
Red Hat Quay
Red Hat OpenShift Data Foundation
Red Hat Advanced Cluster Management for Kubernetes
Red Hat Advanced Cluster Security for Kubernetes
Red Hat OpenShift GitOps
Red Hat OpenShift Pipelines
Red Hat OpenShift Service Mesh

Any node that runs any other container, pod, or component is a worker node that your subscription must cover.

For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document.

To create an infrastructure node, you can use a machine set, label the node, or use a machine config pool.

6.13.1.1. Creating an infrastructure node

Important
See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API.

Cluster requirements dictate that infrastructure nodes, also called infra nodes, be provisioned. The installer provisions only control plane and worker nodes. Worker nodes can be designated as infrastructure or application nodes, also called app nodes, through labeling.

Procedure

Add a label to the worker node that you want to act as an application node:

$ oc label node <node-name> node-role.kubernetes.io/app=""

Add a label to the worker nodes that you want to act as infrastructure nodes:

$ oc label node <node-name> node-role.kubernetes.io/infra=""

Check to see if applicable nodes now have the infra and app roles:

$ oc get nodes

Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector.

Important
If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied.
However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="", when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="", can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles.
You can alternatively use a project node selector, as sketched below, to avoid cluster-wide node selector key conflicts.
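A project node selector is set with the openshift.io/node-selector annotation on a namespace rather than on the cluster-wide Scheduler object. The following is a minimal sketch; the namespace name is illustrative and not part of the official procedure:

apiVersion: v1
kind: Namespace
metadata:
  name: my-infra-project   # illustrative name
  annotations:
    openshift.io/node-selector: "node-role.kubernetes.io/infra="
#...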
Edit the Scheduler object:

$ oc edit scheduler cluster

Add the defaultNodeSelector field with the appropriate node selector:

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  defaultNodeSelector: node-role.kubernetes.io/infra="" 1
# ...

1 This example node selector deploys pods on infrastructure nodes by default.

Save the file to apply the changes.

You can now move infrastructure resources to the newly labeled infra nodes.

Additional resources

Moving resources to infrastructure machine sets
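To keep other workloads off the newly labeled infra nodes, you can additionally taint those nodes and give infrastructure components a matching toleration, as mentioned in the NoSchedule note earlier in this section. The following is a minimal sketch; the reserved value is a common convention rather than a required key, so adapt it to your environment:

$ oc adm taint nodes <node-name> node-role.kubernetes.io/infra=reserved:NoSchedule

A component that must run on the tainted node then needs a toleration such as the following in its pod template:

tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/infra
  value: reserved
#...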
[ "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.26.0 node1.example.com Ready worker 7h v1.26.0 node2.example.com Ready worker 7h v1.26.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.26.0 node1.example.com NotReady,SchedulingDisabled worker 7h v1.26.0 node2.example.com Ready worker 7h v1.26.0", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.26.0 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.26.0 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.26.0 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.26.0-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.26.0", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 
3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.26.0-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.26.0 Kube-Proxy Version: v1.26.0 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. 
#", "oc get pod --selector=<nodeSelector>", "oc get pod --selector=kubernetes.io/os", "oc get pod -l=<nodeSelector>", "oc get pod -l kubernetes.io/os=linux", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.26.0", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api", "oc edit machineset <machine-set-name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "oc edit schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 
3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.26.0 ip-10-0-136-243.ec2.internal Ready master 34m v1.26.0 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.26.0 ip-10-0-142-249.ec2.internal Ready master 34m v1.26.0 ip-10-0-153-11.ec2.internal Ready worker 28m v1.26.0 ip-10-0-153-150.ec2.internal Ready master 34m v1.26.0", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... 
ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: 
tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: 
creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: config.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/nodes/working-with-nodes
Red Hat Ansible Automation Platform Upgrade and Migration Guide
Red Hat Ansible Automation Platform Upgrade and Migration Guide Red Hat Ansible Automation Platform 2.3 Upgrading to the latest version of Ansible Automation Platform and migrating legacy virtual environments to automation execution environments Red Hat Customer Content Services
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/index
Chapter 13. FlowCollector API reference
Chapter 13. FlowCollector API reference

FlowCollector is the Schema for the network flows collection API, which pilots and configures the underlying deployments.

13.1. FlowCollector API specifications

Description
FlowCollector is the schema for the network flows collection API, which pilots and configures the underlying deployments.

Type
object

Property Type Description

apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

kind string Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

spec object Defines the desired state of the FlowCollector resource. *: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for example, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only.

13.1.1. .metadata

Description
Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

Type
object

13.1.2. .spec

Description
Defines the desired state of the FlowCollector resource. *: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for example, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only.

Type
object

Property Type Description

agent object Agent configuration for flows extraction.

consolePlugin object consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available.

deploymentModel string deploymentModel defines the desired type of deployment for flow processing. Possible values are: - Direct (default) to make the flow processor listen directly from the agents. - Kafka to send flows to a Kafka pipeline before consumption by the processor. Kafka can provide better scalability, resiliency, and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka ).

exporters array exporters defines additional optional exporters for custom consumption or storage.

kafka object Kafka configuration, which allows using Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is Kafka.

loki object loki, the flow store, client settings.

namespace string Namespace where Network Observability pods are deployed.

networkPolicy object networkPolicy defines ingress network policy settings for Network Observability components isolation.
processor object processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter.

prometheus object prometheus defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin.

13.1.3. .spec.agent

Description
Agent configuration for flows extraction.

Type
object

Property Type Description

ebpf object ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to eBPF.

type string type [deprecated *] selects the flows tracing agent. Previously, this field allowed you to select between eBPF and IPFIX. Only eBPF is allowed now, so this field is deprecated and is planned for removal in a future version of the API.

13.1.4. .spec.agent.ebpf

Description
ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to eBPF.

Type
object

Property Type Description

advanced object advanced allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk.

cacheActiveTimeout string cacheActiveTimeout is the max period during which the reporter aggregates flows before sending. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection.

cacheMaxFlows integer cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection.

excludeInterfaces array (string) excludeInterfaces contains the interface names that are excluded from flow tracing. An entry enclosed by slashes, such as /br-/, is matched as a regular expression. Otherwise it is matched as a case-sensitive string.

features array (string) List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are:
- PacketDrop: Enable the packets drop flows logging feature. This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. If the spec.agent.ebpf.privileged parameter is not set, an error is reported.
- DNSTracking: Enable the DNS tracking feature.
- FlowRTT: Enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic.
- NetworkEvents: Enable the network events monitoring feature, such as correlating flows and network policies. This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. It requires using the OVN-Kubernetes network plugin with the Observability feature. IMPORTANT: This feature is available as a Technology Preview.
- PacketTranslation: Enable enriching flows with packet translation information, such as Service NAT.
- EbpfManager: Unsupported *. Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed.
- UDNMapping: Unsupported *. Enable interfaces mapping to User Defined Networks (UDN).
This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. It requires using the OVN-Kubernetes network plugin with the Observability feature.

flowFilter object flowFilter defines the eBPF agent configuration regarding flow filtering.

imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above.

interfaces array (string) interfaces contains the interface names from where flows are collected. If empty, the agent fetches all the interfaces in the system, except the ones listed in excludeInterfaces. An entry enclosed by slashes, such as /br-/, is matched as a regular expression. Otherwise it is matched as a case-sensitive string.

kafkaBatchSize integer kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB.

logLevel string logLevel defines the log level for the Network Observability eBPF Agent.

metrics object metrics defines the eBPF agent configuration regarding metrics.

privileged boolean Privileged mode for the eBPF Agent container. When ignored or set to false, the operator sets granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container. If for some reason these capabilities cannot be set, such as if an old kernel version that does not know CAP_BPF is in use, then you can turn on this mode for more global privileges. Some agent features require the privileged mode, such as packet drops tracking (see features) and SR-IOV support.

resources object resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

sampling integer Sampling rate of the flow reporter. 100 means one flow out of 100 is sent. 0 or 1 means all flows are sampled.

13.1.5. .spec.agent.ebpf.advanced

Description
advanced allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk.

Type
object

Property Type Description

env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios.

scheduling object scheduling controls how the pods are scheduled on nodes.

13.1.6. .spec.agent.ebpf.advanced.scheduling

Description
scheduling controls how the pods are scheduled on nodes.

Type
object

Property Type Description

affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling .

nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ .

priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default.

tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints.
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling .

13.1.7. .spec.agent.ebpf.advanced.scheduling.affinity

Description
If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling .

Type
object

13.1.8. .spec.agent.ebpf.advanced.scheduling.tolerations

Description
tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling .

Type
array

13.1.9. .spec.agent.ebpf.flowFilter

Description
flowFilter defines the eBPF agent configuration regarding flow filtering.

Type
object

Property Type Description

action string action defines the action to perform on the flows that match the filter. The available options are Accept, which is the default, and Reject.

cidr string cidr defines the IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64

destPorts integer-or-string destPorts optionally defines the destination ports to filter flows by. To filter a single port, set a single port as an integer value. For example, destPorts: 80. To filter a range of ports, use a "start-end" range in string format. For example, destPorts: "80-100". To filter two ports, use a "port1,port2" in string format. For example, destPorts: "80,100".

direction string direction optionally defines a direction to filter flows by. The available options are Ingress and Egress.

enable boolean Set enable to true to enable the eBPF flow filtering feature.

icmpCode integer icmpCode, for Internet Control Message Protocol (ICMP) traffic, optionally defines the ICMP code to filter flows by.

icmpType integer icmpType, for ICMP traffic, optionally defines the ICMP type to filter flows by.

peerCIDR string peerCIDR defines the Peer IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64

peerIP string peerIP optionally defines the remote IP address to filter flows by. Example: 10.10.10.10.

pktDrops boolean pktDrops optionally filters only flows containing packet drops.

ports integer-or-string ports optionally defines the ports to filter flows by. It is used both for source and destination ports. To filter a single port, set a single port as an integer value. For example, ports: 80. To filter a range of ports, use a "start-end" range in string format. For example, ports: "80-100". To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100".

protocol string protocol optionally defines a protocol to filter flows by. The available options are TCP, UDP, ICMP, ICMPv6, and SCTP.

rules array rules defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: { action: "Accept", cidr: "0.0.0.0/0" }, and then refine with rejecting rules. Unsupported *.

sampling integer sampling is the sampling rate for the matched flows, overriding the global sampling defined at spec.agent.ebpf.sampling.

sourcePorts integer-or-string sourcePorts optionally defines the source ports to filter flows by. To filter a single port, set a single port as an integer value. For example, sourcePorts: 80. To filter a range of ports, use a "start-end" range in string format. For example, sourcePorts: "80-100".
To filter two ports, use a "port1,port2" in string format. For example, sourcePorts: "80,100".

tcpFlags string tcpFlags optionally defines TCP flags to filter flows by. In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: SYN-ACK, FIN-ACK, and RST-ACK.

13.1.10. .spec.agent.ebpf.flowFilter.rules

Description
rules defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: { action: "Accept", cidr: "0.0.0.0/0" }, and then refine with rejecting rules. Unsupported *.

Type
array

13.1.11. .spec.agent.ebpf.flowFilter.rules[]

Description
EBPFFlowFilterRule defines the desired eBPF agent configuration regarding flow filtering rule.

Type
object

Property Type Description

action string action defines the action to perform on the flows that match the filter. The available options are Accept, which is the default, and Reject.

cidr string cidr defines the IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64

destPorts integer-or-string destPorts optionally defines the destination ports to filter flows by. To filter a single port, set a single port as an integer value. For example, destPorts: 80. To filter a range of ports, use a "start-end" range in string format. For example, destPorts: "80-100". To filter two ports, use a "port1,port2" in string format. For example, destPorts: "80,100".

direction string direction optionally defines a direction to filter flows by. The available options are Ingress and Egress.

icmpCode integer icmpCode, for Internet Control Message Protocol (ICMP) traffic, optionally defines the ICMP code to filter flows by.

icmpType integer icmpType, for ICMP traffic, optionally defines the ICMP type to filter flows by.

peerCIDR string peerCIDR defines the Peer IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64

peerIP string peerIP optionally defines the remote IP address to filter flows by. Example: 10.10.10.10.

pktDrops boolean pktDrops optionally filters only flows containing packet drops.

ports integer-or-string ports optionally defines the ports to filter flows by. It is used both for source and destination ports. To filter a single port, set a single port as an integer value. For example, ports: 80. To filter a range of ports, use a "start-end" range in string format. For example, ports: "80-100". To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100".

protocol string protocol optionally defines a protocol to filter flows by. The available options are TCP, UDP, ICMP, ICMPv6, and SCTP.

sampling integer sampling is the sampling rate for the matched flows, overriding the global sampling defined at spec.agent.ebpf.sampling.

sourcePorts integer-or-string sourcePorts optionally defines the source ports to filter flows by. To filter a single port, set a single port as an integer value. For example, sourcePorts: 80. To filter a range of ports, use a "start-end" range in string format. For example, sourcePorts: "80-100". To filter two ports, use a "port1,port2" in string format. For example, sourcePorts: "80,100".

tcpFlags string tcpFlags optionally defines TCP flags to filter flows by. In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: SYN-ACK, FIN-ACK, and RST-ACK.

13.1.12.
.spec.agent.ebpf.metrics

Description
metrics defines the eBPF agent configuration regarding metrics.

Type
object

Property Type Description

disableAlerts array (string) disableAlerts is a list of alerts that should be disabled. Possible values are: NetObservDroppedFlows, which is triggered when the eBPF agent is missing packets or flows, such as when the BPF hashmap is busy or full, or the capacity limiter is being triggered.

enable boolean Set enable to false to disable eBPF agent metrics collection. It is enabled by default.

server object Metrics server endpoint configuration for the Prometheus scraper.

13.1.13. .spec.agent.ebpf.metrics.server

Description
Metrics server endpoint configuration for the Prometheus scraper.

Type
object

Property Type Description

port integer The metrics server HTTP port.

tls object TLS configuration.

13.1.14. .spec.agent.ebpf.metrics.server.tls

Description
TLS configuration.

Type
object

Required
type

Property Type Description

insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the provided certificate. If set to true, the providedCaFile field is ignored.

provided object TLS configuration when type is set to Provided.

providedCaFile object Reference to the CA file when type is set to Provided.

type string Select the type of TLS configuration: - Disabled (default) to not configure TLS for the endpoint. - Provided to manually provide a cert file and a key file. Unsupported *. - Auto to use the OpenShift Container Platform auto-generated certificate using annotations.

13.1.15. .spec.agent.ebpf.metrics.server.tls.provided

Description
TLS configuration when type is set to Provided.

Type
object

Property Type Description

certFile string certFile defines the path to the certificate file name within the config map or secret.

certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary.

name string Name of the config map or secret containing certificates.

namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required.

type string Type for the certificate reference: configmap or secret.

13.1.16. .spec.agent.ebpf.metrics.server.tls.providedCaFile

Description
Reference to the CA file when type is set to Provided.

Type
object

Property Type Description

file string File name within the config map or secret.

name string Name of the config map or secret containing the file.

namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required.

type string Type for the file reference: configmap or secret.

13.1.17. .spec.agent.ebpf.resources

Description
resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Type
object

Property Type Description

limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

requests integer-or-string Requests describes the minimum amount of compute resources required.
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

13.1.18. .spec.consolePlugin

Description
consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available.

Type
object

Property Type Description

advanced object advanced allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk.

autoscaler object autoscaler is the spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2).

enable boolean Enables the console plugin deployment.

imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above.

logLevel string logLevel for the console plugin backend.

portNaming object portNaming defines the configuration of the port-to-service name translation.

quickFilters array quickFilters configures quick filter presets for the Console plugin.

replicas integer replicas defines the number of replicas (pods) to start.

resources object resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

13.1.19. .spec.consolePlugin.advanced

Description
advanced allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk.

Type
object

Property Type Description

args array (string) args allows passing custom arguments to underlying components. Useful for overriding some parameters, such as a URL or a configuration path, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios.

env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios.

port integer port is the plugin service port. Do not use 9002, which is reserved for metrics.

register boolean When set to true, register automatically registers the provided console plugin with the OpenShift Container Platform Console operator. When set to false, you can still register it manually by editing console.operator.openshift.io/cluster with the following command: oc patch console.operator.openshift.io cluster --type='json' -p '[{"op": "add", "path": "/spec/plugins/-", "value": "netobserv-plugin"}]'

scheduling object scheduling controls how the pods are scheduled on nodes.

13.1.20. .spec.consolePlugin.advanced.scheduling

Description
scheduling controls how the pods are scheduled on nodes.

Type
object

Property Type Description

affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling .
nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ . priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default. tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 13.1.21. .spec.consolePlugin.advanced.scheduling.affinity Description If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type object 13.1.22. .spec.consolePlugin.advanced.scheduling.tolerations Description tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type array 13.1.23. .spec.consolePlugin.autoscaler Description autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 13.1.24. .spec.consolePlugin.portNaming Description portNaming defines the configuration of the port-to-service name translation Type object Property Type Description enable boolean Enable the console plugin port-to-service name translation portNames object (string) portNames defines additional port names to use in the console, for example, portNames: {"3100": "loki"} . 13.1.25. .spec.consolePlugin.quickFilters Description quickFilters configures quick filter presets for the Console plugin Type array 13.1.26. .spec.consolePlugin.quickFilters[] Description QuickFilter defines preset configuration for Console's quick filters Type object Required filter name Property Type Description default boolean default defines whether this filter should be active by default or not filter object (string) filter is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a comma-separated string, for example, filter: {"src_namespace": "namespace1,namespace2"} . name string Name of the filter that is displayed in the Console 13.1.27. .spec.consolePlugin.resources Description resources , in terms of compute resources, required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.28. .spec.exporters Description exporters defines additional optional exporters for custom consumption or storage. Type array 13.1.29.
.spec.exporters[] Description FlowCollectorExporter defines an additional exporter to send enriched flows to. Type object Required type Property Type Description ipfix object IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. kafka object Kafka configuration, such as the address and topic, to send enriched flows to. openTelemetry object OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. type string type selects the type of exporters. The available options are Kafka , IPFIX , and OpenTelemetry . 13.1.30. .spec.exporters[].ipfix Description IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. Type object Required targetHost targetPort Property Type Description targetHost string Address of the IPFIX external receiver. targetPort integer Port for the IPFIX external receiver. transport string Transport protocol ( TCP or UDP ) to be used for the IPFIX connection, defaults to TCP . 13.1.31. .spec.exporters[].kafka Description Kafka configuration, such as the address and topic, to send enriched flows to. Type object Required address topic Property Type Description address string Address of the Kafka server sasl object SASL authentication configuration. Unsupported *. tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. topic string Kafka topic to use. It must exist. Network Observability does not create it. 13.1.32. .spec.exporters[].kafka.sasl Description SASL authentication configuration. Unsupported *. Type object Property Type Description clientIDReference object Reference to the secret or config map containing the client ID clientSecretReference object Reference to the secret or config map containing the client secret type string Type of SASL authentication to use, or Disabled if SASL is not used 13.1.33. .spec.exporters[].kafka.sasl.clientIDReference Description Reference to the secret or config map containing the client ID Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.34. .spec.exporters[].kafka.sasl.clientSecretReference Description Reference to the secret or config map containing the client secret Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.35. .spec.exporters[].kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. 
enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.36. .spec.exporters[].kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.37. .spec.exporters[].kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.38. .spec.exporters[].openTelemetry Description OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. Type object Required targetHost targetPort Property Type Description fieldsMapping array Custom fields mapping to an OpenTelemetry conformant format. By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. headers object (string) Headers to add to messages (optional) logs object OpenTelemetry configuration for logs. metrics object OpenTelemetry configuration for metrics. protocol string Protocol of the OpenTelemetry connection. The available options are http and grpc . targetHost string Address of the OpenTelemetry receiver. targetPort integer Port for the OpenTelemetry receiver. tls object TLS client configuration. 13.1.39. .spec.exporters[].openTelemetry.fieldsMapping Description Custom fields mapping to an OpenTelemetry conformant format. By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. Type array 13.1.40. 
.spec.exporters[].openTelemetry.fieldsMapping[] Description Type object Property Type Description input string multiplier integer output string 13.1.41. .spec.exporters[].openTelemetry.logs Description OpenTelemetry configuration for logs. Type object Property Type Description enable boolean Set enable to true to send logs to an OpenTelemetry receiver. 13.1.42. .spec.exporters[].openTelemetry.metrics Description OpenTelemetry configuration for metrics. Type object Property Type Description enable boolean Set enable to true to send metrics to an OpenTelemetry receiver. pushTimeInterval string Specify how often metrics are sent to a collector. 13.1.43. .spec.exporters[].openTelemetry.tls Description TLS client configuration. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.44. .spec.exporters[].openTelemetry.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.45. .spec.exporters[].openTelemetry.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.46. .spec.kafka Description Kafka configuration, allowing you to use Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is Kafka . Type object Required address topic Property Type Description address string Address of the Kafka server sasl object SASL authentication configuration. Unsupported *. tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. topic string Kafka topic to use. It must exist. Network Observability does not create it. 13.1.47.
.spec.kafka.sasl Description SASL authentication configuration. Unsupported *. Type object Property Type Description clientIDReference object Reference to the secret or config map containing the client ID clientSecretReference object Reference to the secret or config map containing the client secret type string Type of SASL authentication to use, or Disabled if SASL is not used 13.1.48. .spec.kafka.sasl.clientIDReference Description Reference to the secret or config map containing the client ID Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.49. .spec.kafka.sasl.clientSecretReference Description Reference to the secret or config map containing the client secret Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.50. .spec.kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.51. .spec.kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.52. .spec.kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. 
name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.53. .spec.loki Description loki , the flow store, client settings. Type object Required mode Property Type Description advanced object advanced allows setting some aspects of the internal configuration of the Loki clients. This section is aimed mostly for debugging and fine-grained performance optimizations. enable boolean Set enable to true to store flows in Loki. The Console plugin can use either Loki or Prometheus as a data source for metrics (see also spec.prometheus.querier ), or both. Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well, such as getting per-pod information or viewing raw flows. If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle. If they are both disabled, the Console plugin is not deployed. lokiStack object Loki configuration for LokiStack mode. This is useful for an easy Loki Operator configuration. It is ignored for other modes. manual object Loki configuration for Manual mode. This is the most flexible configuration. It is ignored for other modes. microservices object Loki configuration for Microservices mode. Use this option when Loki is installed using the microservices deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode ). It is ignored for other modes. mode string mode must be set according to the installation mode of Loki: - Use LokiStack when Loki is managed using the Loki Operator - Use Monolithic when Loki is installed as a monolithic workload - Use Microservices when Loki is installed as microservices, but without Loki Operator - Use Manual if none of the options above match your setup monolithic object Loki configuration for Monolithic mode. Use this option when Loki is installed using the monolithic deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode ). It is ignored for other modes. readTimeout string readTimeout is the maximum total time limit for console plugin Loki queries. A timeout of zero means no timeout. writeBatchSize integer writeBatchSize is the maximum batch size (in bytes) of Loki logs to accumulate before sending. writeBatchWait string writeBatchWait is the maximum time to wait before sending a Loki batch. writeTimeout string writeTimeout is the maximum Loki connection / request time limit. A timeout of zero means no timeout. 13.1.54. .spec.loki.advanced Description advanced allows setting some aspects of the internal configuration of the Loki clients. This section is aimed mostly for debugging and fine-grained performance optimizations. Type object Property Type Description staticLabels object (string) staticLabels is a map of common labels to set on each flow in Loki storage. writeMaxBackoff string writeMaxBackoff is the maximum backoff time for Loki client connection between retries. writeMaxRetries integer writeMaxRetries is the maximum number of retries for Loki client connections.
writeMinBackoff string writeMinBackoff is the initial backoff time for Loki client connection between retries. 13.1.55. .spec.loki.lokiStack Description Loki configuration for LokiStack mode. This is useful for an easy Loki Operator configuration. It is ignored for other modes. Type object Required name Property Type Description name string Name of an existing LokiStack resource to use. namespace string Namespace where this LokiStack resource is located. If omitted, it is assumed to be the same as spec.namespace . 13.1.56. .spec.loki.manual Description Loki configuration for Manual mode. This is the most flexible configuration. It is ignored for other modes. Type object Property Type Description authToken string authToken describes the way to get a token to authenticate to Loki. - Disabled does not send any token with the request. - Forward forwards the user token for authorization. - Host [deprecated *] - uses the local pod service account to authenticate to Loki. When using the Loki Operator, this must be set to Forward . ingesterUrl string ingesterUrl is the address of an existing Loki ingester service to push the flows to. When using the Loki Operator, set it to the Loki gateway service with the network tenant set in path, for example https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . querierUrl string querierUrl specifies the address of the Loki querier service. When using the Loki Operator, set it to the Loki gateway service with the network tenant set in path, for example https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . statusTls object TLS client configuration for Loki status URL. statusUrl string statusUrl specifies the address of the Loki /ready , /metrics and /config endpoints, in case it is different from the Loki querier URL. If empty, the querierUrl value is used. This is useful to show error messages and some context in the frontend. When using the Loki Operator, set it to the Loki HTTP query frontend service, for example https://loki-query-frontend-http.netobserv.svc:3100/ . The statusTls configuration is used when statusUrl is set. tenantID string tenantID is the Loki X-Scope-OrgID that identifies the tenant for each request. When using the Loki Operator, set it to network , which corresponds to a special tenant mode. tls object TLS client configuration for Loki URL. 13.1.57. .spec.loki.manual.statusTls Description TLS client configuration for Loki status URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.58. .spec.loki.manual.statusTls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates.
If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.59. .spec.loki.manual.statusTls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.60. .spec.loki.manual.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.61. .spec.loki.manual.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.62. .spec.loki.manual.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.63. .spec.loki.microservices Description Loki configuration for Microservices mode. 
Use this option when Loki is installed using the microservices deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode ). It is ignored for other modes. Type object Property Type Description ingesterUrl string ingesterUrl is the address of an existing Loki ingester service to push the flows to. querierUrl string querierUrl specifies the address of the Loki querier service. tenantID string tenantID is the Loki X-Scope-OrgID header that identifies the tenant for each request. tls object TLS client configuration for Loki URL. 13.1.64. .spec.loki.microservices.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.65. .spec.loki.microservices.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.66. .spec.loki.microservices.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.67. .spec.loki.monolithic Description Loki configuration for Monolithic mode. Use this option when Loki is installed using the monolithic deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode ). It is ignored for other modes. Type object Property Type Description tenantID string tenantID is the Loki X-Scope-OrgID header that identifies the tenant for each request. tls object TLS client configuration for Loki URL. url string url is the unique address of an existing Loki service that points to both the ingester and the querier. 13.1.68.
.spec.loki.monolithic.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.69. .spec.loki.monolithic.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.70. .spec.loki.monolithic.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.71. .spec.networkPolicy Description networkPolicy defines ingress network policy settings for Network Observability component isolation. Type object Property Type Description additionalNamespaces array (string) additionalNamespaces contains additional namespaces allowed to connect to the Network Observability namespace. It provides flexibility in the network policy configuration, but if you need a more specific configuration, you can disable it and install your own instead. enable boolean Set enable to true to deploy network policies on the namespaces used by Network Observability (main and privileged). It is disabled by default. These network policies better isolate the Network Observability components to prevent undesired connections to them. To increase the security of connections, enable this option or create your own network policy. 13.1.72. .spec.processor Description processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter. Type object Property Type Description addZone boolean addZone allows availability zone awareness by labelling flows with their source and destination zones.
This feature requires the "topology.kubernetes.io/zone" label to be set on nodes. advanced object advanced allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. clusterName string clusterName is the name of the cluster to appear in the flows data. This is useful in a multi-cluster context. When using OpenShift Container Platform, leave empty to make it automatically determined. deduper object deduper allows you to sample or drop flows identified as duplicates, in order to save on resource usage. Unsupported *. filters array filters lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in spec.agent.ebpf.flowFilter ), such as allowing you to filter by Kubernetes namespace, but with a lesser improvement in performance. Unsupported *. imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above kafkaConsumerAutoscaler object kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). kafkaConsumerBatchSize integer kafkaConsumerBatchSize indicates to the broker the maximum batch size, in bytes, that the consumer accepts. Ignored when not using Kafka. Default: 10MB. kafkaConsumerQueueCapacity integer kafkaConsumerQueueCapacity defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka. kafkaConsumerReplicas integer kafkaConsumerReplicas defines the number of replicas (pods) to start for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. logLevel string logLevel of the processor runtime logTypes string logTypes defines the desired record types to generate. Possible values are: - Flows to export regular network flows. This is the default. - Conversations to generate events for started conversations, ended conversations as well as periodic "tick" updates. - EndedConversations to generate only ended conversations events. - All to generate both network flows and all conversations events. It is not recommended due to the impact on resource footprint. metrics object Metrics define the processor configuration regarding metrics multiClusterDeployment boolean Set multiClusterDeployment to true to enable multi clusters feature. This adds the clusterName label to flows data resources object resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ subnetLabels object subnetLabels allows you to define custom labels on subnets and IPs or to enable automatic labelling of recognized subnets in OpenShift Container Platform, which is used to identify cluster external traffic. When a subnet matches the source or destination IP of a flow, a corresponding field is added: SrcSubnetLabel or DstSubnetLabel . 13.1.73. .spec.processor.advanced Description advanced allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk.
Type object Property Type Description conversationEndTimeout string conversationEndTimeout is the time to wait after a network flow is received, to consider the conversation ended. This delay is ignored when a FIN packet is collected for TCP flows (see conversationTerminatingTimeout instead). conversationHeartbeatInterval string conversationHeartbeatInterval is the time to wait between "tick" events of a conversation conversationTerminatingTimeout string conversationTerminatingTimeout is the time to wait from detected FIN flag to end a conversation. Only relevant for TCP flows. dropUnusedFields boolean dropUnusedFields [deprecated *] this setting is not used anymore. enableKubeProbes boolean enableKubeProbes is a flag to enable or disable Kubernetes liveness and readiness probes env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS , that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. healthPort integer healthPort is a collector HTTP port in the Pod that exposes the health check API port integer Port of the flow collector (host port). By convention, some values are forbidden. It must be greater than 1024 and different from 4500, 4789 and 6081. profilePort integer profilePort allows setting up a Go pprof profiler listening to this port scheduling object scheduling controls how the pods are scheduled on nodes. secondaryNetworks array Defines secondary networks to be checked for resource identification. To guarantee a correct identification, indexed values must form a unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled. 13.1.74. .spec.processor.advanced.scheduling Description scheduling controls how the pods are scheduled on nodes. Type object Property Type Description affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ . priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default. tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 13.1.75. .spec.processor.advanced.scheduling.affinity Description If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type object 13.1.76. .spec.processor.advanced.scheduling.tolerations Description tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type array 13.1.77.
.spec.processor.advanced.secondaryNetworks Description Defines secondary networks to be checked for resource identification. To guarantee a correct identification, indexed values must form a unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled. Type array 13.1.78. .spec.processor.advanced.secondaryNetworks[] Description Type object Required index name Property Type Description index array (string) index is a list of fields to use for indexing the pods. They should form a unique Pod identifier across the cluster. Can be any of: MAC , IP , Interface . Fields absent from the 'k8s.v1.cni.cncf.io/network-status' annotation must not be added to the index. name string name should match the network name as visible in the pods annotation 'k8s.v1.cni.cncf.io/network-status'. 13.1.79. .spec.processor.deduper Description deduper allows you to sample or drop flows identified as duplicates, in order to save on resource usage. Unsupported *. Type object Property Type Description mode string Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate same flows reported from different nodes. - Use Drop to drop every flow considered as duplicates, allowing you to save more on resource usage but potentially losing some information such as the network interfaces used from peer, or network events. - Use Sample to randomly keep only one flow in 50, which is the default, among the ones considered as duplicates. This is a compromise between dropping every duplicate or keeping every duplicate. This sampling action comes in addition to the Agent-based sampling. If both Agent and Processor sampling values are 50 , the combined sampling is 1:2500. - Use Disabled to turn off Processor-based de-duplication. sampling integer sampling is the sampling rate when deduper mode is Sample . 13.1.80. .spec.processor.filters Description filters lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in spec.agent.ebpf.flowFilter ), such as allowing you to filter by Kubernetes namespace, but with a lesser improvement in performance. Unsupported *. Type array 13.1.81. .spec.processor.filters[] Description FLPFilterSet defines the desired configuration for FLP-based filtering satisfying all conditions. Type object Property Type Description allOf array filters is a list of matches that must all be satisfied in order to remove a flow. outputTarget string If specified, these filters only target a single output: Loki , Metrics or Exporters . By default, all outputs are targeted. sampling integer sampling is an optional sampling rate to apply to this filter. 13.1.82. .spec.processor.filters[].allOf Description filters is a list of matches that must all be satisfied in order to remove a flow. Type array 13.1.83. .spec.processor.filters[].allOf[] Description FLPSingleFilter defines the desired configuration for a single FLP-based filter. Type object Required field matchType Property Type Description field string Name of the field to filter on. Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc . matchType string Type of matching to apply. value string Value to filter on. When matchType is Equal or NotEqual , you can use field injection with $(SomeField) to refer to any other field of the flow. 13.1.84.
.spec.processor.kafkaConsumerAutoscaler Description kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 13.1.85. .spec.processor.metrics Description Metrics define the processor configuration regarding metrics Type object Property Type Description disableAlerts array (string) disableAlerts is a list of alerts that should be disabled. Possible values are: NetObservNoFlows , which is triggered when no flows are being observed for a certain period. NetObservLokiError , which is triggered when flows are being dropped due to Loki errors. includeList array (string) includeList is a list of metric names to specify which ones to generate. The names correspond to the names in Prometheus without the prefix. For example, namespace_egress_packets_total shows up as netobserv_namespace_egress_packets_total in Prometheus. Note that the more metrics you add, the bigger the impact on Prometheus workload resources. Metrics enabled by default are: namespace_flows_total , node_ingress_bytes_total , node_egress_bytes_total , workload_ingress_bytes_total , workload_egress_bytes_total , namespace_drop_packets_total (when PacketDrop feature is enabled), namespace_rtt_seconds (when FlowRTT feature is enabled), namespace_dns_latency_seconds (when DNSTracking feature is enabled), namespace_network_policy_events_total (when NetworkEvents feature is enabled). More information, with full list of available metrics: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md server object Metrics server endpoint configuration for Prometheus scraper 13.1.86. .spec.processor.metrics.server Description Metrics server endpoint configuration for Prometheus scraper Type object Property Type Description port integer The metrics server HTTP port. tls object TLS configuration. 13.1.87. .spec.processor.metrics.server.tls Description TLS configuration. Type object Required type Property Type Description insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the provided certificate. If set to true , the providedCaFile field is ignored. provided object TLS configuration when type is set to Provided . providedCaFile object Reference to the CA file when type is set to Provided . type string Select the type of TLS configuration: - Disabled (default) to not configure TLS for the endpoint. - Provided to manually provide cert file and a key file. Unsupported *. - Auto to use OpenShift Container Platform auto-generated certificate using annotations. 13.1.88. .spec.processor.metrics.server.tls.provided Description TLS configuration when type is set to Provided . Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required.
type string Type for the certificate reference: configmap or secret . 13.1.89. .spec.processor.metrics.server.tls.providedCaFile Description Reference to the CA file when type is set to Provided . Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.90. .spec.processor.resources Description resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.91. .spec.processor.subnetLabels Description subnetLabels allows you to define custom labels on subnets and IPs or to enable automatic labelling of recognized subnets in OpenShift Container Platform, which is used to identify cluster external traffic. When a subnet matches the source or destination IP of a flow, a corresponding field is added: SrcSubnetLabel or DstSubnetLabel . Type object Property Type Description customLabels array customLabels allows you to customize subnet and IP labelling, such as to identify cluster-external workloads or web services. If you enable openShiftAutoDetect , customLabels can override the detected subnets in case they overlap. openShiftAutoDetect boolean openShiftAutoDetect allows, when set to true , automatic detection of the machine, pod, and service subnets based on the OpenShift Container Platform install configuration and the Cluster Network Operator configuration. Indirectly, this is a way to accurately detect external traffic: flows that are not labeled for those subnets are external to the cluster. Enabled by default on OpenShift Container Platform. 13.1.92. .spec.processor.subnetLabels.customLabels Description customLabels allows you to customize subnet and IP labelling, such as to identify cluster-external workloads or web services. If you enable openShiftAutoDetect , customLabels can override the detected subnets in case they overlap. Type array 13.1.93. .spec.processor.subnetLabels.customLabels[] Description SubnetLabel allows labelling subnets and IPs, such as to identify cluster-external workloads or web services. Type object Required cidrs name Property Type Description cidrs array (string) List of CIDRs, such as ["1.2.3.4/32"] . name string Label name, used to flag matching flows. 13.1.94. .spec.prometheus Description prometheus defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin. Type object Property Type Description querier object Prometheus querying configuration, such as client settings, used in the Console plugin.
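As an illustration of the subnetLabels settings described in sections 13.1.91 to 13.1.93, the following is a minimal sketch of a FlowCollector resource that keeps OpenShift subnet auto-detection enabled and adds one custom label. The API version, the label name, and the CIDR are assumptions made for this example, not values taken from the reference. Example

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster   # the FlowCollector resource is typically named "cluster"
spec:
  processor:
    subnetLabels:
      # keep automatic detection of machine, pod, and service subnets
      openShiftAutoDetect: true
      customLabels:
        # flows whose source or destination IP matches this CIDR get
        # SrcSubnetLabel or DstSubnetLabel set to "public-dns" (example value)
        - name: public-dns
          cidrs:
            - "8.8.8.8/32"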
13.1.95. .spec.prometheus.querier Description Prometheus querying configuration, such as client settings, used in the Console plugin. Type object Required mode Property Type Description enable boolean When enable is true , the Console plugin queries flow metrics from Prometheus instead of Loki whenever possible. It is enabled by default: set it to false to disable this feature. The Console plugin can use either Loki or Prometheus as a data source for metrics (see also spec.loki ), or both. Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well, such as getting per-pod information or viewing raw flows. If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle. If they are both disabled, the Console plugin is not deployed. manual object Prometheus configuration for Manual mode. mode string mode must be set according to the type of Prometheus installation that stores Network Observability metrics: - Use Auto to try configuring automatically. In OpenShift Container Platform, it uses the Thanos querier from OpenShift Container Platform Cluster Monitoring - Use Manual for a manual setup timeout string timeout is the read timeout for console plugin queries to Prometheus. A timeout of zero means no timeout. 13.1.96. .spec.prometheus.querier.manual Description Prometheus configuration for Manual mode. Type object Property Type Description forwardUserToken boolean Set to true to forward the logged-in user token in queries to Prometheus tls object TLS client configuration for Prometheus URL. url string url is the address of an existing Prometheus service to use for querying metrics. 13.1.97. .spec.prometheus.querier.manual.tls Description TLS client configuration for Prometheus URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.98. .spec.prometheus.querier.manual.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.99. .spec.prometheus.querier.manual.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret.
certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret .
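To tie together the spec.prometheus.querier properties described in sections 13.1.95 to 13.1.99, the following is a hedged sketch of a Manual mode querier configuration. The Prometheus URL, the config map name, and the certificate file name are hypothetical values to adjust for your environment. Example

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  prometheus:
    querier:
      enable: true
      mode: Manual                # use Auto on OpenShift to rely on the Thanos querier instead
      timeout: 30s                # read timeout for console plugin queries
      manual:
        url: https://prometheus.example.svc:9090   # hypothetical Prometheus address
        forwardUserToken: true
        tls:
          enable: true
          insecureSkipVerify: false
          caCert:
            type: configmap       # or "secret"
            name: prometheus-ca   # hypothetical config map name
            certFile: ca.crt      # hypothetical file name within the config map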
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/flowcollector-api
Chapter 40. ResourceTemplate schema reference
Chapter 40. ResourceTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaNodePoolTemplate , KafkaUserTemplate , ZookeeperClusterTemplate Property Property type Description metadata MetadataTemplate Metadata applied to the resource.
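Example As an illustrative sketch only, with hypothetical label and annotation values: the metadata property of a ResourceTemplate typically carries labels and annotations, shown here on the persistentVolumeClaim template within a Kafka CR:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      persistentVolumeClaim:             # a ResourceTemplate
        metadata:
          labels:
            team: platform               # hypothetical label
          annotations:
            example.com/backup: "true"   # hypothetical annotation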
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-resourcetemplate-reference
Chapter 15. Replacing storage devices
Chapter 15. Replacing storage devices 15.1. Replacing operational or failed storage devices on Google Cloud installer-provisioned infrastructure When you need to replace a device in a dynamically created storage cluster on a Google Cloud installer-provisioned infrastructure, you must replace the storage node. For information about how to replace nodes, see: Replacing operational nodes on Google Cloud installer-provisioned infrastructure Replacing failed nodes on Google Cloud installer-provisioned infrastructures .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/replacing-storage-devices_rhodf
Chapter 6. Troubleshooting a multi-site Ceph Object Gateway
Chapter 6. Troubleshooting a multi-site Ceph Object Gateway This chapter contains information on how to fix the most common errors related to multi-site Ceph Object Gateway configuration and operational conditions. Note When the radosgw-admin bucket sync status command reports that the bucket is behind on shards even if the data is consistent across multi-site, run additional writes to the bucket. It synchronizes the status reports and displays a message that the bucket is caught up with its source. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. 6.1. Error code definitions for the Ceph Object Gateway The Ceph Object Gateway logs contain error and warning messages to assist in troubleshooting conditions in your environment. Some common ones are listed below with suggested resolutions. Common error messages data_sync: ERROR: a sync operation returned error This is the high-level data sync process complaining that a lower-level bucket sync process returned an error. This message is redundant; the bucket sync error appears above it in the log. data sync: ERROR: failed to sync object: BUCKET_NAME : OBJECT_NAME Either the process failed to fetch the required object over HTTP from a remote gateway or the process failed to write that object to RADOS, and it will be tried again. data sync: ERROR: failure in sync, backing out (sync_status=-2) A low-level message reflecting one of the above conditions, specifically that the data was deleted before it could sync, thus showing a -2 ENOENT status. data sync: ERROR: failure in sync, backing out (sync_status=-5) A low-level message reflecting one of the above conditions, specifically that the gateway failed to write that object to RADOS, thus showing a -5 EIO status. ERROR: failed to fetch remote data log info: ret=11 This is the generic EAGAIN error code from libcurl, reflecting an error condition from another gateway. The process retries by default. meta sync: ERROR: failed to read mdlog info with (2) No such file or directory The shard of the mdlog was never created, so there is nothing to sync. Syncing error messages failed to sync object Either the process failed to fetch this object over HTTP from a remote gateway or it failed to write that object to RADOS, and it will be tried again. failed to sync bucket instance: (11) Resource temporarily unavailable A connection issue between the primary and secondary zones. failed to sync bucket instance: (125) Operation canceled A race condition exists between writes to the same RADOS object. ERROR: request failed: (13) Permission denied If the realm has been changed on the master zone, the master zone's gateway may need to be restarted to recognize this user While configuring the secondary site, an rgw realm pull --url http://primary_endpoint --access-key <> --secret <> command sometimes fails with a permission denied error. In such cases, ensure that the system user credentials on the primary site are the same, by using the following commands: Additional Resources Contact Red Hat Support for any additional assistance. 6.2. Syncing a multi-site Ceph Object Gateway A multi-site sync reads the change log from other zones. To get a high-level view of the sync progress from the metadata and the data logs, you can use the following command: Example This command lists which log shards, if any, are behind their source zone. Note Sometimes you might observe recovering shards when running the radosgw-admin sync status command.
For data sync, there are 128 shards of replication logs that are each processed independently. If any of the actions triggered by these replication log events result in any error from the network, storage, or elsewhere, those errors are tracked so that the operation can be retried later. While a given shard has errors that need a retry, the radosgw-admin sync status command reports that shard as recovering . This recovery happens automatically, so the operator does not need to intervene to resolve them. If the sync status output reports that log shards are behind, run the following command, substituting the shard ID for X . Syntax Example The output lists which buckets are pending sync and which buckets, if any, are going to be retried due to errors. Inspect the status of individual buckets with the following command, substituting the bucket ID for X . Syntax Replace X with the ID number of the bucket. The result shows which bucket index log shards are behind their source zone. A common sync error is EBUSY , which means the sync is already in progress, often on another gateway. Errors are written to the sync error log, which you can read with the following command: The syncing process retries until it is successful. Errors can still occur that require intervention. 6.3. Performance counters for multi-site Ceph Object Gateway data sync The following performance counters are available for multi-site configurations of the Ceph Object Gateway to measure data sync: poll_latency measures the latency of requests for remote replication logs. fetch_bytes measures the number of objects and bytes fetched by data sync. Use the ceph --admin-daemon command to view the current metric data for the performance counters: Syntax Example Note You must run the ceph --admin-daemon command from the node running the daemon. Additional Resources See the Ceph performance counters chapter in the Red Hat Ceph Storage Administration Guide for more information about performance counters. 6.4. Synchronizing data in a multi-site Ceph Object Gateway configuration In a multi-site Ceph Object Gateway configuration of a storage cluster, failover and failback cause data synchronization to stop. The radosgw-admin sync status command reports that the data sync is behind for an extended period of time. You can run the radosgw-admin data sync init command to synchronize data between the sites and then restart the Ceph Object Gateway. This command does not touch any actual object data and initiates data sync for a specified source zone. It causes the zone to restart a full sync from the source zone. Important Contact Red Hat support before running the data sync init command. If you perform a full restart of sync and a large amount of data must be synced from the source zone, bandwidth consumption is high, so plan accordingly. Note If a user accidentally deletes a bucket on the secondary site, you can use the metadata sync init command on the site to synchronize data. Prerequisites A running Red Hat Ceph Storage cluster. Ceph Object Gateway configured at a minimum of two sites. Procedure Check the sync status between the sites: Example Synchronize data from the secondary zone: Example Restart all the Ceph Object Gateway daemons at the site: Example
[ "radosgw-admin user info --uid synchronization_user, and radosgw-admin zone get", "radosgw-admin sync status", "radosgw-admin data sync status --shard-id= X --source-zone= ZONE_NAME", "radosgw-admin data sync status --shard-id=27 --source-zone=us-east { \"shard_id\": 27, \"marker\": { \"status\": \"incremental-sync\", \"marker\": \"1_1534494893.816775_131867195.1\", \"next_step_marker\": \"\", \"total_entries\": 1, \"pos\": 0, \"timestamp\": \"0.000000\" }, \"pending_buckets\": [], \"recovering_buckets\": [ \"pro-registry:4ed07bb2-a80b-4c69-aa15-fdc17ae6f5f2.314303.1:26\" ] }", "radosgw-admin bucket sync status --bucket= X .", "radosgw-admin sync error list", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw. RGW_ID .asok perf dump data-sync-from- ZONE_NAME", "ceph --admin-daemon /var/run/ceph/ceph-client.rgw.host02-rgw0.103.94309060818504.asok perf dump data-sync-from-us-west { \"data-sync-from-us-west\": { \"fetch bytes\": { \"avgcount\": 54, \"sum\": 54526039885 }, \"fetch not modified\": 7, \"fetch errors\": 0, \"poll latency\": { \"avgcount\": 41, \"sum\": 2.533653367, \"avgtime\": 0.061796423 }, \"poll errors\": 0 } }", "radosgw-admin sync status realm d713eec8-6ec4-4f71-9eaf-379be18e551b (india) zonegroup ccf9e0b2-df95-4e0a-8933-3b17b64c52b7 (shared) zone 04daab24-5bbd-4c17-9cf5-b1981fd7ff79 (primary) current time 2022-09-15T06:53:52Z zonegroup features enabled: resharding metadata sync no sync (zone is master) data sync source: 596319d2-4ffe-4977-ace1-8dd1790db9fb (secondary) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source", "radosgw-admin data sync init --source-zone primary", "ceph orch restart rgw.myrgw" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/troubleshooting_guide/troubleshooting-a-multisite-ceph-object-gateway
Chapter 14. Starting Kickstart installations
Chapter 14. Starting Kickstart installations You can start Kickstart installations in multiple ways: Automatically by editing the boot options in PXE boot. Automatically by providing the file on a volume with a specific name. You can register RHEL using the Red Hat Content Delivery Network (CDN). CDN is a geographically distributed series of web servers. These servers provide, for example, packages and updates to RHEL hosts with a valid subscription. During the installation, registering and installing RHEL from the CDN offers the following benefits: Utilizing the latest packages for an up-to-date system immediately after installation, and integrated support for connecting to Red Hat Insights and enabling System Purpose. 14.1. Starting a Kickstart installation automatically using PXE AMD64, Intel 64, and 64-bit ARM systems and IBM Power Systems servers can boot by using a PXE server. When you configure the PXE server, you can add the boot option into the boot loader configuration file, which in turn lets you start the installation automatically. Using this approach, it is possible to automate the installation completely, including the boot process. This procedure is intended as a general reference; detailed steps differ based on your system's architecture, and not all options are available on all architectures (for example, you cannot use PXE boot on 64-bit IBM Z). Prerequisites You have a Kickstart file ready in a location accessible from the system to be installed. You have a PXE server that can be used to boot the system and begin the installation. Procedure Open the boot loader configuration file on your PXE server, and add the inst.ks= boot option to the appropriate line. The name of the file and its syntax depend on your system's architecture and hardware: On AMD64 and Intel 64 systems with BIOS, the file name can be either default or based on your system's IP address. In this case, add the inst.ks= option to the append line in the installation entry. A sample append line in the configuration file looks similar to the following: On systems using the GRUB boot loader (AMD64, Intel 64, and 64-bit ARM systems with UEFI firmware and IBM Power Systems servers), the file name is grub.cfg . In this file, append the inst.ks= option to the kernel line in the installation entry. A sample kernel line in the configuration file looks similar to the following: Boot the installation from the network server. The installation begins now, using the installation options specified in the Kickstart file. If the Kickstart file is valid and contains all required commands, the installation is completely automated. Note If you have installed a Red Hat Enterprise Linux Beta release on a system with UEFI Secure Boot enabled, add the Beta public key to the system's Machine Owner Key (MOK) list. Additional resources For information about setting up a PXE server, see Preparing a PXE installation source . 14.2. Starting a Kickstart installation automatically using a local volume You can start a Kickstart installation by putting a Kickstart file with a specific name on a specifically labelled storage volume. Prerequisites You have a volume prepared with label OEMDRV and the Kickstart file present in its root as ks.cfg . A drive containing this volume is available on the system as the installation program boots. Procedure Boot the system using local media (a CD, DVD, or a USB flash drive). At the boot prompt, specify the required boot options.
If a required repository is in a network location, you might need to configure the network by using the ip= option. Without this option, the installer tries to configure all network devices by using the DHCP protocol. To access a software source from which the necessary packages will be installed, you might need to add the inst.repo= option. If you do not specify this option, you must specify the installation source in the Kickstart file. For more information about installation sources, see Kickstart commands for installation program configuration and flow control . Start the installation by confirming your added boot options. The installation begins now, and the Kickstart file is automatically detected and used to start an automated Kickstart installation. Note If you have installed a Red Hat Enterprise Linux Beta release on a system with UEFI Secure Boot enabled, add the Beta public key to the system's Machine Owner Key (MOK) list. For more information about UEFI Secure Boot and Red Hat Enterprise Linux Beta releases, see the UEFI Secure Boot and Beta release requirements . 14.3. Booting the installation on IBM Z to install RHEL in an LPAR 14.3.1. Booting the RHEL installation from an SFTP, FTPS, or FTP server to install in an IBM Z LPAR You can install RHEL into an LPAR by using an SFTP, FTPS, or FTP server. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you want to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load from Removable Media or Server . In the dialog box that follows, select SFTP/FTPS/FTP Server , and enter the following information: Host Computer - Host name or IP address of the FTP server you want to install from, for example ftp.redhat.com User ID - Your user name on the FTP server, or anonymous . Password - Your password. Use your email address if you are logging in as anonymous. File location (optional) - Directory on the FTP server holding the Red Hat Enterprise Linux for IBM Z, for example /rhel/s390x/ . Click Continue . In the dialog that follows, keep the default selection of generic.ins and click Continue . 14.3.2. Booting the RHEL installation from a prepared DASD to install in an IBM Z LPAR Use this procedure when installing Red Hat Enterprise Linux into an LPAR using an already prepared DASD. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you want to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load . In the dialog box that follows, select Normal as the Load type . As Load address , fill in the device number of the DASD. Click the OK button. 14.3.3.
Booting the RHEL installation from an FCP-attached SCSI disk to install in an IBM Z LPAR Use this procedure when installing Red Hat Enterprise Linux into an LPAR using an already prepared FCP-attached SCSI disk. Procedure Log in on the IBM Z Hardware Management Console (HMC) or the Support Element (SE) as a user with sufficient privileges to install a new operating system to an LPAR. On the Systems tab, select the mainframe you want to work with, then on the Partitions tab select the LPAR to which you want to install. At the bottom of the screen, under Daily , find Operating System Messages . Double-click Operating System Messages to show the text console on which Linux boot messages will appear. Double-click Load . In the dialog box that follows, select SCSI as the Load type . As Load address , fill in the device number of the FCP channel connected with the SCSI disk. As World wide port name , fill in the WWPN of the storage system containing the disk as a 16-digit hexadecimal number. As Logical unit number , fill in the LUN of the disk as a 16-digit hexadecimal number. Leave the Boot record logical block address as 0 and the Operating system specific load parameters empty. Click the OK button. 14.4. Booting the installation on IBM Z to install RHEL in z/VM When installing under z/VM, you can boot from: The z/VM virtual reader A DASD or an FCP-attached SCSI disk prepared with the zipl boot loader 14.4.1. Booting the RHEL installation by using the z/VM Reader You can boot from the z/VM reader. Procedure If necessary, add the device containing the z/VM TCP/IP tools to your CMS disk list. For example: Replace fm with any FILEMODE letter. For a connection to an FTPS server, enter: Where host is the host name or IP address of the FTP server that hosts the boot images ( kernel.img and initrd.img ). Log in and execute the following commands. Use the (repl option if you are overwriting existing kernel.img , initrd.img , generic.prm , or redhat.exec files: Optional: Check whether the files were transferred correctly by using the CMS command filelist to show the received files and their format. It is important that kernel.img and initrd.img have a fixed record length format denoted by F in the Format column and a record length of 80 in the Lrecl column. For example: Press PF3 to quit filelist and return to the CMS prompt. Customize boot parameters in generic.prm as necessary. For details, see Customizing boot parameters . Another way to configure storage and network devices is by using a CMS configuration file. In such a case, add the CMSDASD= and CMSCONFFILE= parameters to generic.prm . Finally, execute the REXX script redhat.exec to boot the installation program: 14.4.2. Booting the RHEL installation by using a prepared DASD Perform the following steps to use a prepared DASD: Procedure Boot from the prepared DASD and select the zipl boot menu entry referring to the Red Hat Enterprise Linux installation program. Use a command of the following form: Replace DASD_device_number with the device number of the boot device, and boot_entry_number with the zipl configuration menu for this device. For example: 14.4.3. Booting the RHEL installation by using a prepared FCP-attached SCSI disk Perform the following steps to boot from a prepared FCP-attached SCSI disk: Procedure Configure the SCSI boot loader of z/VM to access the prepared SCSI disk in the FCP Storage Area Network. Select the prepared zipl boot menu entry referring to the Red Hat Enterprise Linux installation program.
Use a command of the following form: Replace WWPN with the World Wide Port Name of the storage system and LUN with the Logical Unit Number of the disk. The 16-digit hexadecimal numbers must be split into two pairs of eight digits each. For example: Optional: Confirm your settings with the command: Boot the FCP device connected with the storage system containing the disk with the following command: For example: 14.5. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serves a different purpose; they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface, which runs in virtual console 6, press Ctrl + Alt + F6 . During a text mode installation, you start in virtual console 1 ( tmux ), and switching to console 6 opens a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n and Ctrl + b p to switch to the next or previous tmux window, respectively. Table 14.1. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log . Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log .
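Example For reference, the following is a minimal Kickstart file sketch; every value in it (language, keyboard layout, time zone, password placeholder, and partitioning scheme) is a hypothetical starting point, not a recommended production configuration:
# Minimal hypothetical ks.cfg for a fully automated installation
lang en_US.UTF-8
keyboard us
timezone America/New_York --utc
rootpw --plaintext changeme    # placeholder only; use an encrypted password in practice
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@^minimal-environment
%end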
[ "append initrd=initrd.img inst.ks=http://10.32.5.1/mnt/archive/RHEL-9/9.x/x86_64/kickstarts/ks.cfg", "kernel vmlinuz inst.ks=http://10.32.5.1/mnt/archive/RHEL-9/9.x/x86_64/kickstarts/ks.cfg", "cp link tcpmaint 592 592 acc 592 fm", "ftp <host> (secure", "cd / location/of/install-tree /images/ ascii get generic.prm (repl get redhat.exec (repl locsite fix 80 binary get kernel.img (repl get initrd.img (repl quit", "VMUSER FILELIST A0 V 169 Trunc=169 Size=6 Line=1 Col=1 Alt=0 Cmd Filename Filetype Fm Format Lrecl Records Blocks Date Time REDHAT EXEC B1 V 22 1 1 4/15/10 9:30:40 GENERIC PRM B1 V 44 1 1 4/15/10 9:30:32 INITRD IMG B1 F 80 118545 2316 4/15/10 9:30:25 KERNEL IMG B1 F 80 74541 912 4/15/10 9:30:17", "redhat", "cp ipl DASD_device_number loadparm boot_entry_number", "cp ipl eb1c loadparm 0", "cp set loaddev portname WWPN lun LUN bootprog boot_entry_number", "cp set loaddev portname 50050763 050b073d lun 40204011 00000000 bootprog 0", "query loaddev", "cp ipl FCP_device", "cp ipl fc00" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automatically_installing_rhel/starting-kickstart-installations_rhel-installer
Chapter 29. Adding columns to guided decision tables
Chapter 29. Adding columns to guided decision tables After you have created the guided decision table, you can define and add various types of columns within the guided decision tables designer. Prerequisites Any data objects that will be used for column parameters, such as Facts and Fields, have been created within the same package where the guided decision table is found, or have been imported from another package in Data Objects → New item of the guided decision tables designer. For descriptions of these column parameters, see the "Required column parameters" segments for each column type in Chapter 30, Types of columns in guided decision tables . For details about creating data objects, see Section 26.1, "Creating data objects" . Procedure In the guided decision tables designer, click Columns → Insert Column . Click Include advanced options to view the full list of column options. Figure 29.1. Add columns Select the column type that you want to add, click Next , and follow the steps in the wizard to specify the data required to add the column. For descriptions of each column type and required parameters for setup, see Chapter 30, Types of columns in guided decision tables . Click Finish to add the configured column. After all columns are added, you can begin adding rows of rules that correlate to your columns to complete the decision table. For details, see Chapter 34, Adding rows and defining rules in guided decision tables . The following is an example decision table for a loan application decision service: Figure 29.2. Example of complete guided decision table
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/guided-decision-tables-columns-create-proc
Chapter 10. Creating the data plane for SR-IOV and DPDK environments
Chapter 10. Creating the data plane for SR-IOV and DPDK environments The Red Hat OpenStack Services on OpenShift (RHOSO) data plane consists of RHEL 9.4 nodes. Use the OpenStackDataPlaneNodeSet custom resource definition (CRD) to create the custom resources (CRs) that define the nodes and the layout of the data plane. After you have defined your OpenStackDataPlaneNodeSet CRs, you create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs. An OpenStackDataPlaneNodeSet CR is a logical grouping of nodes of a similar type. A data plane typically consists of multiple OpenStackDataPlaneNodeSet CRs to define groups of nodes with different configurations and roles. You can use pre-provisioned or unprovisioned nodes in an OpenStackDataPlaneNodeSet CR: Pre-provisioned node: You have used your own tooling to install the operating system on the node before adding it to the data plane. Unprovisioned node: The node does not have an operating system installed before you add it to the data plane. The node is provisioned by using the Cluster Baremetal Operator (CBO) as part of the data plane creation and deployment process. Note You cannot include both pre-provisioned and unprovisioned nodes in the same OpenStackDataPlaneNodeSet CR. To create and deploy a data plane, you must perform the following tasks: Create a Secret CR for each node set for Ansible to use to execute commands on the data plane nodes. Create the OpenStackDataPlaneNodeSet CRs that define the nodes and layout of the data plane. Create the OpenStackDataPlaneDeployment CR that triggers the Ansible execution that deploys and configures the software for the specified list of OpenStackDataPlaneNodeSet CRs. The following procedures create two simple node sets, one with pre-provisioned nodes, and one with bare-metal nodes that must be provisioned during the node set deployment. The procedures aim to get you up and running quickly with a data plane environment that you can use to troubleshoot issues and test the environment before adding all the customizations you require. You can add additional node sets to a deployed environment, and you can customize your deployed environment by updating the common configuration in the default ConfigMap CR for the service, and by creating custom services. For more information on how to customize your data plane after deployment, see the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. 10.1. Prerequisites A functional control plane, created with the OpenStack Operator. For more information, see Creating the control plane for NFV environments . You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges. 10.2. Creating the data plane secrets The data plane requires several Secret custom resources (CRs) to operate. The Secret CRs are used by the data plane nodes for the following functionality: To enable secure access between nodes: You must generate an SSH key and create an SSH key Secret CR for each key to enable Ansible to manage the RHEL nodes on the data plane. Ansible executes commands with this user and key. You can create an SSH key for each node set in your data plane. You must generate an SSH key and create an SSH key Secret CR for each key to enable migration of instances between Compute nodes. To register the operating system of the nodes that are not registered to the Red Hat Customer Portal. To enable repositories for the nodes. 
To provide access to libvirt. Prerequisites Pre-provisioned nodes are configured with an SSH public key in the $HOME/.ssh/authorized_keys file for a user with passwordless sudo privileges. For information, see Configuring reserved user and group IDs in the RHEL Configuring basic system settings guide. Procedure For unprovisioned nodes, create the SSH key pair for Ansible: Replace <key_file_name> with the name to use for the key pair. Create the Secret CR for Ansible and apply it to the cluster: Replace <key_file_name> with the name and location of your SSH key pair file. Optional: Only include the --from-file=authorized_keys option for bare-metal nodes that must be provisioned when creating the data plane. Create the SSH key pair for instance migration: Create the Secret CR for migration and apply it to the cluster: Create a file on your workstation named secret_subscription.yaml that contains the subscription-manager credentials for registering the operating system of the nodes that are not registered to the Red Hat Customer Portal: Replace <base64_username> and <base64_password> with strings that are base64-encoded. You can use the following command to generate a base64-encoded string: Tip If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password. Create the Secret CR: Create a Secret CR that contains the Red Hat registry credentials: Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Knowledge Base article Creating Registry Service Accounts . Create a file on your workstation named secret_libvirt.yaml to define the libvirt secret: Replace <base64_password> with a base64-encoded string with maximum length 63 characters. You can use the following command to generate a base64-encoded password: Tip If you do not want to base64-encode the username and password, you can use the stringData field instead of the data field to set the username and password. Create the Secret CR: Verify that the Secret CRs are created: 10.3. Creating a custom SR-IOV Compute service You must create a custom SR-IOV Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service performs the following tasks on the SR-IOV Compute nodes: Applies CPU pinning parameters. Performs PCI passthrough. To create the SR-IOV custom service, you must perform these actions: Create a ConfigMap for CPU pinning that maps a CPU pinning configuration to a specified set of SR-IOV Compute nodes. Create a ConfigMap for PCI passthrough that maps a PCI passthrough configuration to a specified set of SR-IOV Compute nodes. Create the actual SR-IOV custom service that will implement the ConfigMaps on your data plane. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Create a ConfigMap CR that defines configurations for CPU pinning and PCI passthrough, and save it to a YAML file on your workstation, for example, pinning-passthrough.yaml .
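Example Because the ConfigMap itself is not reproduced here, the following sketch shows the shape it can take; the name, configuration file key, CPU ranges, NIC address, and physical network name are hypothetical and must be replaced with values that match your hardware, as described in the next step:
apiVersion: v1
kind: ConfigMap
metadata:
  name: sriov-pinning-passthrough        # hypothetical name
  namespace: openstack
data:
  25-nova-sriov.conf: |
    [compute]
    # hypothetical host CPUs for unpinned instances
    cpu_shared_set = 0-3,24-27
    # hypothetical host CPUs for pinned instances
    cpu_dedicated_set = 4-23,28-31
    [pci]
    # hypothetical NIC address and physical network name
    passthrough_whitelist = {"address": "0000:19:00.0", "physical_network": "sriov-net1"}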
Change the values (in boldface) as appropriate for your environment: cpu_shared_set : enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy. cpu_dedicated_set : enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled. For example, 4-12,^8,15 reserves cores from 4-12 and 15, excluding 8. <network_name_n> : replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:physical_network field.) <number> : replace <number> with the number of NUMA nodes you are using. passthrough_whitelist : specify valid NIC addresses and names for "address" and "physical_network" . Create the ConfigMap object, using the ConfigMap CR file: Example $ oc create -f pinning-passthrough.yaml -n openstack Create an OpenStackDataPlaneService CR that defines the SR-IOV custom service, and save it to a YAML file on your workstation, for example nova-custom-sriov.yaml : Add the ConfigMap CRs to the custom service, and specify the Secret CR for the cell that the node set that runs this service connects to: Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field: playbook : identifies the default playbook available for your service. In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html . Create the nova-custom-sriov service: $ oc apply -f nova-custom-sriov.yaml -n openstack Verify that the custom service is created: $ oc get openstackdataplaneservice nova-custom-sriov -o yaml -n openstack 10.4. Creating a custom OVS-DPDK Compute service You must create a custom OVS-DPDK Compute service for NFV in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. This service is an Ansible service that is executed on the data plane. This custom service applies CPU pinning parameters on the OVS-DPDK Compute nodes. To create the OVS-DPDK custom service, you must perform these actions: Create a ConfigMap for CPU pinning that maps a CPU pinning configuration to a specified set of OVS-DPDK Compute nodes. Create the actual OVS-DPDK custom service that will implement the ConfigMap on your data plane. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Create a ConfigMap CR that defines a configuration for CPU pinning, and save it to a YAML file on your workstation, for example, dpdk-pinning.yaml . Change the values (in boldface) as appropriate for your environment: cpu_shared_set : enter a comma-separated list or range of physical host CPU numbers used to provide vCPU inventory, determine the host CPUs that unpinned instances can be scheduled to, and determine the host CPUs that instance emulator threads should be offloaded to for instances configured with the share emulator thread policy. cpu_dedicated_set : enter a comma-separated list or range of physical host CPU numbers to which processes for pinned instance CPUs can be scheduled.
For example, 4-12,^8,15 reserves cores from 4-12 and 15, excluding 8. <network_name_n> : replace <network_name1> and <network_name2> with the names of the physical networks that your gateways are on. (This network is set in the neutron network provider:physical_network field.) <number> : replace <number> with the number of NUMA nodes you are using. Create the ConfigMap object, using the ConfigMap CR file: Example $ oc create -f dpdk-pinning.yaml -n openstack Create an OpenStackDataPlaneService CR that defines the OVS-DPDK custom service, and save it to a YAML file on your workstation, for example nova-custom-ovsdpdk.yaml : Add the ConfigMap CR to the custom service, and specify the Secret CR for the cell that the node set that runs this service connects to: Specify the Ansible commands to create the custom service, by referencing an Ansible playbook or by including the Ansible play in the playbookContents field: playbook : identifies the default playbook available for your service. In this case, it is the Compute service (nova). To see the listing of default playbooks, see https://openstack-k8s-operators.github.io/edpm-ansible/playbooks.html . Create the nova-custom-ovsdpdk service: $ oc apply -f nova-custom-ovsdpdk.yaml -n openstack Verify that the custom service is created: $ oc get openstackdataplaneservice nova-custom-ovsdpdk -o yaml -n openstack 10.5. Creating a set of data plane nodes with pre-provisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of pre-provisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1 . If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate . Procedure Create a file on your workstation named openstack_preprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set. Specify that the nodes in this set are pre-provisioned: Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes: Replace <secret-key> with the name of the SSH key Secret CR you created for this node set in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret . Create a Persistent Volume Claim (PVC) in the openstack namespace on your Red Hat OpenShift Container Platform (RHOCP) cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin.
NFS is incompatible with FIFO files, and the ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For information about the properties you can use to configure common node attributes, see OpenStackDataPlaneNodeSet CR spec properties . Register the operating system of the nodes that are not registered to the Red Hat Customer Portal, and enable repositories for your nodes. The following steps demonstrate how to register your nodes to CDN. For details on how to register your nodes with Red Hat Satellite 6.13, see Managing Hosts . Create a Secret CR that contains the subscription-manager credentials: Create a Secret CR that contains the Red Hat registry credentials: Replace <username> and <password> with your Red Hat registry username and password credentials. For information about how to create your registry service account, see the Red Hat Knowledgebase article Creating Registry Service Accounts . Specify the Secret CRs to use to source the usernames and passwords: For a complete list of the Red Hat Customer Portal registration commands, see https://access.redhat.com/solutions/253273 . For information about how to log into registry.redhat.io , see https://access.redhat.com/RegistryAuthentication#creating-registry-service-accounts-6 . Define each node in this node set: 1 The node definition reference, for example, edpm-compute-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For more information, see: OpenStackDataPlaneNodeSet CR properties Network interface configuration options Example custom network interfaces for NFV Save the openstack_preprovisioned_node_set.yaml definition file. Create the data plane resources: Verify that the data plane resources have been created: For information about the meaning of the returned status, see Data plane conditions and states . Verify that the Secret resource was created for the node set: Verify the services were created: 10.5.1. Example OpenStackDataPlaneNodeSet CR for pre-provisioned nodes The following example OpenStackDataPlaneNodeSet CR creates a node set from pre-provisioned Compute nodes with some node-specific configuration.
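Because the example resource itself is not reproduced here, the following abbreviated sketch illustrates the shape such a CR can take; the node set name, Ansible user, hostname, and IP address are hypothetical placeholders, and a real node set also specifies the list of services to deploy:
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-edpm-preprovisioned    # hypothetical node set name
  namespace: openstack
spec:
  preProvisioned: true
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret
    ansible:
      ansibleUser: cloud-admin           # hypothetical SSH user
  nodes:
    edpm-compute-0:                      # node definition reference
      hostName: edpm-compute-0
      networks:
        - name: ctlplane
          subnetName: subnet1
          fixedIP: 192.0.2.100           # hypothetical address within the NetConfig allocation range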
Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. 10.6. Creating a set of data plane nodes with unprovisioned nodes Define an OpenStackDataPlaneNodeSet custom resource (CR) for each logical grouping of unprovisioned nodes in your data plane, for example, nodes grouped by hardware, location, or networking. You can define as many node sets as necessary for your deployment. Each node can be included in only one OpenStackDataPlaneNodeSet CR. Each node set can be connected to only one Compute cell. By default, node sets are connected to cell1 . If you customize your control plane to include additional Compute cells, you must specify the cell to which the node set is connected. For more information on adding Compute cells, see Connecting an OpenStackDataPlaneNodeSet CR to a Compute cell in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. You use the nodeTemplate field to configure the properties that all nodes in an OpenStackDataPlaneNodeSet CR share, and the nodeTemplate.nodes field for node-specific properties. Node-specific configurations override the inherited values from the nodeTemplate . For more information about provisioning bare-metal nodes, see Planning provisioning for bare-metal data plane nodes in Planning your deployment . Prerequisites Cluster Baremetal Operator (CBO) is installed and configured for provisioning. For more information, see Planning provisioning for bare-metal data plane nodes in Planning your deployment . A BareMetalHost CR is registered and inspected for each bare-metal data plane node. Each bare-metal node must be in the Available state after inspection. For more information about configuring bare-metal nodes, see Bare metal configuration in the Red Hat OpenShift Container Platform (RHOCP) Postinstallation configuration guide. Procedure Create a file on your workstation named openstack_unprovisioned_node_set.yaml to define the OpenStackDataPlaneNodeSet CR: 1 The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. Update the name in this example to a name that reflects the nodes in the set. Define the baremetalSetTemplate field to describe the configuration of the bare-metal nodes: Replace <bmh_namespace> with the namespace defined in the corresponding BareMetalHost CR for the node. Replace <ansible_ssh_user> with the username of the Ansible SSH user. Replace <bmh_label> with the label defined in the corresponding BareMetalHost CR for the node. Replace <interface> with the control plane interface the node connects to, for example, enp6s0 . The Bare Metal Operator (BMO) manages BareMetalHost CRs in the openshift-machine-api namespace by default. You must update the Provisioning CR to watch all namespaces; a sample patch command is sketched below. Add the SSH key secret that you created to enable Ansible to connect to the data plane nodes: Replace <secret-key> with the name of the SSH key Secret CR you created in Creating the data plane secrets , for example, dataplane-ansible-ssh-private-key-secret .
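A sketch of the Provisioning CR update mentioned above, assuming the default resource name provisioning-configuration used by the Bare Metal Operator; verify the name in your cluster before running it:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true}}'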
Create a Persistent Volume Claim (PVC) in the openstack namespace on your RHOCP cluster to store logs. Set the volumeMode to Filesystem and accessModes to ReadWriteOnce . Do not request storage for logs from a PersistentVolume (PV) that uses the NFS volume plugin. NFS is incompatible with FIFO files, and the ansible-runner creates a FIFO file to write logs to. For information about PVCs, see Understanding persistent storage in the RHOCP Storage guide and Red Hat OpenShift Container Platform cluster requirements in Planning your deployment . Enable persistent logging for the data plane nodes: Replace <pvc_name> with the name of the PVC storage on your RHOCP cluster. Add the common configuration for the set of nodes in this group under the nodeTemplate section. Each node in this OpenStackDataPlaneNodeSet inherits this configuration. For more information, see: OpenStackDataPlaneNodeSet CR properties Network interface configuration options Example custom network interfaces for NFV Define each node in this node set: 1 The node definition reference, for example, edpm-compute-0 . Each node in the node set must have a node definition. 2 Defines the IPAM and the DNS records for the node. 3 Specifies a predictable IP address for the network that must be in the allocation range defined for the network in the NetConfig CR. 4 Node-specific Ansible variables that customize the node. Note Nodes defined within the nodes section can configure the same Ansible variables that are configured in the nodeTemplate section. Where an Ansible variable is configured for both a specific node and within the nodeTemplate section, the node-specific values override those from the nodeTemplate section. You do not need to replicate all the nodeTemplate Ansible variables for a node to override the default and set some node-specific values. You only need to configure the Ansible variables you want to override for the node. Many ansibleVars include edpm in the name, which stands for "External Data Plane Management". For information about the properties you can use to configure node attributes, see OpenStackDataPlaneNodeSet CR properties . Save the openstack_unprovisioned_node_set.yaml definition file. Create the data plane resources: Verify that the data plane resources have been created: For information on the meaning of the returned status, see Data plane conditions and states. Verify that the Secret resource was created for the node set: Verify the services were created: 10.6.1. Example OpenStackDataPlaneNodeSet CR for unprovisioned nodes The following example OpenStackDataPlaneNodeSet CR creates a node set from unprovisioned Compute nodes with some node-specific configuration. The unprovisioned Compute nodes are provisioned when the node set is created. Update the name of the OpenStackDataPlaneNodeSet CR in this example to a name that reflects the nodes in the set. The OpenStackDataPlaneNodeSet CR name must be unique, contain only lower case alphanumeric characters and - (hyphens) or . (periods), start and end with an alphanumeric character, and have a maximum length of 53 characters. 10.7. OpenStackDataPlaneNodeSet CR spec properties The following sections detail the OpenStackDataPlaneNodeSet CR spec properties you can configure. 10.7.1. nodeTemplate Defines the common attributes for the nodes in this OpenStackDataPlaneNodeSet . You can override these common attributes in the definition for each individual node. Table 10.1.
nodeTemplate properties Field Description ansibleSSHPrivateKeySecret Name of the private SSH key secret that contains the private SSH key for connecting to nodes. Secret name format: Secret.data.ssh-privatekey For more information, see Creating an SSH authentication secret . Default: dataplane-ansible-ssh-private-key-secret managementNetwork Name of the network to use for management (SSH/Ansible). Default: ctlplane networks Network definitions for the OpenStackDataPlaneNodeSet . ansible Ansible configuration options. For more information, see ansible properties . extraMounts The files to mount into an Ansible Execution Pod. userData UserData configuration for the OpenStackDataPlaneNodeSet . networkData NetworkData configuration for the OpenStackDataPlaneNodeSet . 10.7.2. nodes Defines the node names and node-specific attributes for the nodes in this OpenStackDataPlaneNodeSet . Overrides the common attributes defined in the nodeTemplate . Table 10.2. nodes properties Field Description ansible Ansible configuration options. For more information, see ansible properties . extraMounts The files to mount into an Ansible Execution Pod. hostName The node name. managementNetwork Name of the network to use for management (SSH/Ansible). networkData NetworkData configuration for the node. networks Instance networks. userData Node-specific user data. 10.7.3. ansible Defines the group of Ansible configuration options. Table 10.3. ansible properties Field Description ansibleUser The user associated with the secret you created in Creating the data plane secrets . Default: rhel-user ansibleHost SSH host for the Ansible connection. ansiblePort SSH port for the Ansible connection. ansibleVars The Ansible variables that customize the set of nodes. You can use this property to configure any custom Ansible variable, including the Ansible variables available for each edpm-ansible role. For a complete list of Ansible variables by role, see the edpm-ansible documentation . Note The ansibleVars parameters that you can configure for an OpenStackDataPlaneNodeSet CR are determined by the services defined for the OpenStackDataPlaneNodeSet . The OpenStackDataPlaneService CRs call the Ansible playbooks from the edpm-ansible playbook collection , which include the roles that are executed as part of the data plane service. ansibleVarsFrom A list of sources to populate Ansible variables from. Values defined by an AnsibleVars with a duplicate key take precedence. For more information, see ansibleVarsFrom properties . 10.7.4. ansibleVarsFrom Defines the list of sources to populate Ansible variables from. Table 10.4. ansibleVarsFrom properties Field Description prefix An optional identifier to prepend to each key in the ConfigMap . Must be a C_IDENTIFIER. configMapRef The ConfigMap CR to select the ansibleVars from. secretRef The Secret CR to select the ansibleVars from. 10.8. Network interface configuration options Use the following tables to understand the available options for configuring network interfaces for Red Hat OpenStack Services on OpenShift (RHOSO) environments. interface vlan ovs_bridge Network interface bonding ovs_bond LACP with OVS bonding modes linux_bond routes Note Linux bridges are not supported in RHOSO. Instead, use methods such as Linux bonds and dedicated NICs for RHOSO traffic. 10.8.1. interface Defines a single network interface. The network interface name uses either the actual interface name ( eth0 , eth1 , enp0s25 ) or a set of numbered interfaces ( nic1 , nic2 , nic3 ). 
The network interfaces of hosts within a role do not have to be exactly the same when you use numbered interfaces such as nic1 and nic2 , instead of named interfaces such as eth0 and eno2 . For example, one host might have interfaces em1 and em2 , while another has eno1 and eno2 , but you can refer to the NICs of both hosts as nic1 and nic2 . The order of numbered interfaces corresponds to the order of named network interface types: ethX interfaces, such as eth0 , eth1 , and so on. Names appear in this format when consistent device naming is turned off in udev . enoX and emX interfaces, such as eno0 , eno1 , em0 , em1 , and so on. These are usually on-board interfaces. enX and any other interfaces, sorted alphanumerically, such as enp3s0 , enp3s1 , ens3 , and so on. These are usually add-on interfaces. The numbered NIC scheme includes only live interfaces, for example, interfaces that have a cable attached to the switch. If you have some hosts with four interfaces and some with six interfaces, use nic1 to nic4 and attach only four cables on each host. Table 10.5. interface options Option Default Description name Name of the interface. The network interface name uses either the actual interface name ( eth0 , eth1 , enp0s25 ) or a set of numbered interfaces ( nic1 , nic2 , nic3 ). use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the interface. routes A list of routes assigned to the interface. For more information, see Section 10.8.7, "routes" . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the interface as the primary interface. Required only when the interface is a member of a bond. persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the interface. ethtool_opts Set this option to "rx-flow-hash udp4 sdfn" to improve throughput when you use VXLAN on certain NICs. Example ... edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic2 ... 10.8.2. vlan Defines a VLAN. Use the VLAN ID and subnet passed from the parameters section. Table 10.6. vlan options Option Default Description vlan_id The VLAN ID. device The parent device to attach the VLAN. Use this parameter when the VLAN is not a member of an OVS bridge. For example, use this parameter to attach the VLAN to a bonded interface device. use_dhcp False Use DHCP to get an IP address. use_dhcpv6 False Use DHCP to get a v6 IP address. addresses A list of IP addresses assigned to the VLAN. routes A list of routes assigned to the VLAN. For more information, see Section 10.8.7, "routes" . mtu 1500 The maximum transmission unit (MTU) of the connection. primary False Defines the VLAN as the primary interface. persist_mapping False Write the device alias configuration instead of the system names. dhclient_args None Arguments that you want to pass to the DHCP client. dns_servers None List of DNS servers that you want to use for the VLAN. Example 10.8.3. ovs_bridge Defines a bridge in Open vSwitch (OVS), which connects multiple interface , ovs_bond , and vlan objects together.
10.8.3. ovs_bridge

Defines a bridge in Open vSwitch (OVS), which connects multiple interface, ovs_bond, and vlan objects together. The network interface type, ovs_bridge, takes a parameter name.

Important: The ovs_bridge interface is not recommended for control plane network traffic. The OVS bridge connects to the Networking service (neutron) server to obtain configuration data. If the OpenStack control traffic, typically the Control Plane and Internal API networks, is placed on an OVS bridge, then connectivity to the neutron server is lost whenever you upgrade OVS, or whenever the OVS bridge is restarted by the admin user or process. This causes some downtime. If downtime is not acceptable in these circumstances, then you must place the control group networks on a separate interface or bond rather than on an OVS bridge:

You can achieve a minimal setting when you put the Internal API network on a VLAN on the provisioning interface and the OVS bridge on a second interface.
To implement bonding, you need at least two bonds (four network interfaces). Place the control group on a Linux bond. If the switch does not support LACP fallback to a single interface for PXE boot, then this solution requires at least five NICs.

Note: If you have multiple bridges, you must use distinct bridge names rather than accepting the default name of bridge_name. If you do not use distinct names, then during the converge phase, two network bonds are placed on the same bridge.

Table 10.7. ovs_bridge options

name: Name of the bridge.
use_dhcp (default: False): Use DHCP to get an IP address.
use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
addresses: A list of IP addresses assigned to the bridge.
routes: A list of routes assigned to the bridge. For more information, see Section 10.8.7, "routes".
mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
members: A sequence of interface, VLAN, and bond objects that you want to use in the bridge.
ovs_options: A set of options to pass to OVS when creating the bridge.
ovs_extra: A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bridge.
defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping (default: False): Write the device alias configuration instead of the system names.
dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
dns_servers (default: None): List of DNS servers that you want to use for the bridge.

Example
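A minimal sketch of an OVS bridge, following the same edpm_network_config_template pattern as the interface example above. The bridge carries the control plane IP and attaches a primary interface; the Jinja2 variables are assumed to come from the standard node set context:

  network_config:
  - type: ovs_bridge
    name: {{ neutron_physical_bridge_name }}
    mtu: {{ min_viable_mtu }}
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    domain: {{ dns_search_domains }}
    addresses:
    - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
    routes: {{ ctlplane_host_routes }}
    members:
    - type: interface
      name: nic1
      mtu: {{ min_viable_mtu }}
      # force the MAC address of the bridge to this interface
      primary: true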
10.8.4. Network interface bonding

You can bundle multiple physical NICs together to form a single logical channel known as a bond. You can configure bonds to provide redundancy for high availability systems or increased throughput. Red Hat OpenStack Platform supports Open vSwitch (OVS) kernel bonds, OVS-DPDK bonds, and Linux kernel bonds.

Table 10.8. Supported interface bonding types

OVS kernel bonds: type value ovs_bond; allowed bridge type: ovs_bridge; allowed members: interface.
OVS-DPDK bonds: type value ovs_dpdk_bond; allowed bridge type: ovs_user_bridge; allowed members: ovs_dpdk_port.
Linux kernel bonds: type value linux_bond; allowed bridge type: ovs_bridge; allowed members: interface.

Important: Do not combine ovs_bridge and ovs_user_bridge on the same node.

10.8.4.1. ovs_bond

Defines a bond in Open vSwitch (OVS) to join two or more interfaces together. This helps with redundancy and increases bandwidth.

Table 10.9. ovs_bond options

name: Name of the bond.
use_dhcp (default: False): Use DHCP to get an IP address.
use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
addresses: A list of IP addresses assigned to the bond.
routes: A list of routes assigned to the bond. For more information, see Section 10.8.7, "routes".
mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
primary (default: False): Defines the interface as the primary interface.
members: A sequence of interface objects that you want to use in the bond.
ovs_options: A set of options to pass to OVS when creating the bond. For more information, see Table 10.10, "ovs_options parameters for OVS bonds".
ovs_extra: A set of options to set as the OVS_EXTRA parameter in the network configuration file of the bond.
defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping (default: False): Write the device alias configuration instead of the system names.
dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
dns_servers (default: None): List of DNS servers that you want to use for the bond.

Table 10.10. ovs_options parameters for OVS bonds

bond_mode=balance-slb: Source load balancing (slb) balances flows based on source MAC address and output VLAN, with periodic rebalancing as traffic patterns change. When you configure a bond with the balance-slb bonding option, there is no configuration required on the remote switch. The Networking service (neutron) assigns each source MAC and VLAN pair to a link and transmits all packets from that MAC and VLAN through that link. A simple hashing algorithm based on source MAC address and VLAN number is used, with periodic rebalancing as traffic patterns change. The balance-slb mode is similar to mode 2 bonds used by the Linux bonding driver. You can use this mode to provide load balancing even when the switch is not configured to use LACP.
bond_mode=active-backup: When you configure a bond using active-backup bond mode, the Networking service keeps one NIC in standby. The standby NIC resumes network operations when the active connection fails. Only one MAC address is presented to the physical switch. This mode does not require switch configuration, and works when the links are connected to separate switches. This mode does not provide load balancing.
lacp=[active | passive | off]: Controls the Link Aggregation Control Protocol (LACP) behavior. Only certain switches support LACP. If your switch does not support LACP, use bond_mode=balance-slb or bond_mode=active-backup.
other-config:lacp-fallback-ab=true: Set active-backup as the bond mode if LACP fails.
other_config:lacp-time=[fast | slow]: Set the LACP heartbeat to one second (fast) or 30 seconds (slow). The default is slow.
other_config:bond-detect-mode=[miimon | carrier]: Set the link detection to use miimon heartbeats (miimon) or monitor carrier (carrier). The default is carrier.
other_config:bond-miimon-interval=100: If using miimon, set the heartbeat interval in milliseconds.
bond_updelay=1000: Set the interval, in milliseconds, that a link must be up to be activated, to prevent flapping.
other_config:bond-rebalance-interval=10000: Set the interval, in milliseconds, at which flows are rebalanced between bond members. Set this value to zero to disable flow rebalancing between bond members.

Example configurations follow: an OVS bond, and an OVS DPDK bond in which the bond is created as part of an OVS user space bridge.
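Both sketches below follow the template fragments used in this chapter. The variables bond_interface_ovs_options and num_dpdk_interface_rx_queues are assumed to be Ansible variables that you define, so treat them as placeholders:

Example - OVS bond

  network_config:
  - type: ovs_bridge
    name: br-bond
    dns_servers: {{ ctlplane_dns_nameservers }}
    members:
    - type: ovs_bond
      name: bond1
      mtu: {{ min_viable_mtu }}
      ovs_options: {{ bond_interface_ovs_options }}
      members:
      - type: interface
        name: nic2
        mtu: {{ min_viable_mtu }}
        primary: true
      - type: interface
        name: nic3
        mtu: {{ min_viable_mtu }}

Example - OVS DPDK bond

In this example, a bond is created as part of an OVS user space bridge:

  network_config:
  - type: ovs_user_bridge
    name: br-dpdk0
    members:
    - type: ovs_dpdk_bond
      name: dpdkbond0
      rx_queue: {{ num_dpdk_interface_rx_queues }}
      members:
      - type: ovs_dpdk_port
        name: dpdk0
        members:
        - type: interface
          name: nic4
      - type: ovs_dpdk_port
        name: dpdk1
        members:
        - type: interface
          name: nic5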
10.8.5. LACP with OVS bonding modes

You can use Open vSwitch (OVS) bonds with the optional Link Aggregation Control Protocol (LACP). LACP is a negotiation protocol that creates a dynamic bond for load balancing and fault tolerance. Use the following table to understand support compatibility for OVS kernel and OVS-DPDK bonded interfaces in conjunction with LACP options.

Important: On control and storage networks, Red Hat recommends that you use Linux bonds with VLAN and LACP, because OVS bonds carry the potential for control plane disruption that can occur when OVS or the neutron agent is restarted for updates, hot fixes, and other events. The Linux bond-LACP-VLAN configuration provides NIC management without the OVS disruption potential.

Table 10.11. LACP options for OVS kernel and OVS-DPDK bond modes

High availability (active-passive): OVS bond mode active-backup; compatible LACP options: active, passive, or off.
Increased throughput (active-active): OVS bond mode balance-slb; compatible LACP options: active, passive, or off. Performance is affected by extra parsing per packet. There is a potential for vhost-user lock contention.
Increased throughput (active-active): OVS bond mode balance-tcp; compatible LACP options: active or passive. As with balance-slb, performance is affected by extra parsing per packet and there is a potential for vhost-user lock contention. LACP must be configured and enabled. Set lb-output-action=true. For example:

  ovs-vsctl set port <bond port> other_config:lb-output-action=true

10.8.6. linux_bond

Defines a Linux bond that joins two or more interfaces together. This helps with redundancy and increases bandwidth. Ensure that you include the kernel-based bonding options in the bonding_options parameter.

Table 10.12. linux_bond options

name: Name of the bond.
use_dhcp (default: False): Use DHCP to get an IP address.
use_dhcpv6 (default: False): Use DHCP to get a v6 IP address.
addresses: A list of IP addresses assigned to the bond.
routes: A list of routes assigned to the bond. See Section 10.8.7, "routes".
mtu (default: 1500): The maximum transmission unit (MTU) of the connection.
members: A sequence of interface objects that you want to use in the bond.
bonding_options: A set of options when creating the bond. See bonding_options parameters for Linux bonds.
defroute (default: True): Use a default route provided by the DHCP service. Only applies when you enable use_dhcp or use_dhcpv6.
persist_mapping (default: False): Write the device alias configuration instead of the system names.
dhclient_args (default: None): Arguments that you want to pass to the DHCP client.
dns_servers (default: None): List of DNS servers that you want to use for the bond.

bonding_options parameters for Linux bonds

The bonding_options parameter sets the specific bonding options for the Linux bond. See the Linux bonding examples that follow this table:

mode: Sets the bonding mode, which in the example is 802.3ad, or LACP mode. For more information about Linux bonding modes, see Configuring a network bond in Red Hat Enterprise Linux 9, Configuring and managing networking.
lacp_rate: Defines whether LACP packets are sent every 1 second, or every 30 seconds.
updelay: Defines the minimum amount of time that an interface must be active before it is used for traffic. This minimum configuration helps to mitigate port flapping outages.
miimon: The interval in milliseconds that is used for monitoring the port state using the MIIMON functionality of the driver.

Example configurations follow: a basic Linux bond, a Linux bond that bonds two interfaces, a Linux bond set to active-backup mode with one VLAN, and a Linux bond on an OVS bridge in which the bond is set to 802.3ad with LACP mode and one VLAN.
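The following sketches reconstruct the four examples named above from the template fragments used in this chapter. The interface names (ens1f0, nic2, p1p1, and so on) are illustrative, and the VLAN entries are adapted to the Jinja2 lookup style used elsewhere in this section, so treat the variable names as assumptions to adjust for your environment.

Example - Linux bond

  network_config:
  - type: linux_bond
    name: bond1
    mtu: {{ min_viable_mtu }}
    bonding_options: "mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4"
    members:
    - type: interface
      name: ens1f0
      mtu: {{ min_viable_mtu }}
      primary: true
    - type: interface
      name: ens1f1
      mtu: {{ min_viable_mtu }}

Example - Linux bond: bonding two interfaces

  network_config:
  - type: linux_bond
    name: bond1
    bonding_options: "mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100"
    members:
    - type: interface
      name: nic2
    - type: interface
      name: nic3

Example - Linux bond set to active-backup mode with one VLAN

  network_config:
  - type: linux_bond
    name: bond_api
    bonding_options: "mode=active-backup"
    use_dhcp: false
    dns_servers: {{ ctlplane_dns_nameservers }}
    members:
    - type: interface
      name: nic3
      primary: true
    - type: interface
      name: nic4
  - type: vlan
    vlan_id: {{ lookup('vars', networks_lower['internalapi'] ~ '_vlan_id') }}
    device: bond_api
    addresses:
    - ip_netmask: {{ lookup('vars', networks_lower['internalapi'] ~ '_ip') }}/{{ lookup('vars', networks_lower['internalapi'] ~ '_cidr') }}

Example - Linux bond on OVS bridge

In this example, the bond is set to 802.3ad with LACP mode and one VLAN:

  network_config:
  - type: ovs_bridge
    name: br-tenant
    use_dhcp: false
    mtu: 9000
    members:
    - type: linux_bond
      name: bond_tenant
      bonding_options: "mode=802.3ad updelay=1000 miimon=100"
      use_dhcp: false
      dns_servers: {{ ctlplane_dns_nameservers }}
      members:
      - type: interface
        name: p1p1
        primary: true
      - type: interface
        name: p1p2
    - type: vlan
      device: bond_tenant
      vlan_id: {{ lookup('vars', networks_lower['tenant'] ~ '_vlan_id') }}
      addresses:
      - ip_netmask: {{ lookup('vars', networks_lower['tenant'] ~ '_ip') }}/{{ lookup('vars', networks_lower['tenant'] ~ '_cidr') }}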
10.8.7. routes

Defines a list of routes to apply to a network interface, VLAN, bridge, or bond.

Table 10.13. routes options

ip_netmask (default: None): IP and netmask of the destination network.
default (default: False): Sets this route to a default route. Equivalent to setting ip_netmask: 0.0.0.0/0.
next_hop (default: None): The IP address of the router used to reach the destination network.

Example - routes

  routes:
  - default: true
    next_hop: {{ ctlplane_gateway_ip }}

Section 10.9, "Example custom network interfaces for NFV"

10.9. Example custom network interfaces for NFV

The following examples illustrate how you can use a template to customize network interfaces for NFV in Red Hat OpenStack Services on OpenShift (RHOSO) environments.

10.9.1. Example template - non-partitioned NIC

This template example configures the RHOSO networks on a NIC that is not partitioned.

1 2 edpm-compute-n: defines the edpm_network_config_os_net_config_mappings variable to map the actual NICs. You identify each NIC by mapping the MAC address or the device name on each Compute node to the NIC ID that the RHOSO os-net-config tool uses, which is typically nic<n>.
3 linux_bond: creates a control-plane Linux bond for an isolated network. In this example, a Linux bond is created with mode active-backup on nic3 and nic4.
4 5 type: vlan: assigns VLANs to Linux bonds. In this example, the VLAN IDs of the internalapi and storage networks are assigned to bond_api.
6 ovs_user_bridge: sets a bridge with OVS-DPDK ports. In this example, an OVS user bridge is created with a DPDK bond that has two DPDK ports that correspond to nic7 and nic8 for the tenant network. A GENEVE tunnel is used.
7 9 11 sriov_pf: creates SR-IOV VFs. In this example, an interface type of sriov_pf is configured as a physical function that the host can use.
8 10 12 numvfs: set only the number of VFs that are required.

10.9.2. Example template - partitioned NIC

This template example configures the RHOSO networks on a NIC that is partitioned. This example shows only the portion of the custom resource (CR) definition where the NIC is partitioned.

Additional resources

Section 10.8, "Network interface configuration options"

10.10. Deploying the data plane

You use the OpenStackDataPlaneDeployment CRD to configure the services on the data plane nodes and deploy the data plane. You control the execution of Ansible on the data plane by creating OpenStackDataPlaneDeployment custom resources (CRs). Each OpenStackDataPlaneDeployment CR models a single Ansible execution. When the OpenStackDataPlaneDeployment successfully completes execution, it does not automatically execute the Ansible again, even if the OpenStackDataPlaneDeployment or related OpenStackDataPlaneNodeSet resources are changed. To start another Ansible execution, you must create another OpenStackDataPlaneDeployment CR.

Create an OpenStackDataPlaneDeployment CR that deploys each of your OpenStackDataPlaneNodeSet CRs.

Procedure

Create a file on your workstation named openstack_data_plane_deploy.yaml to define the OpenStackDataPlaneDeployment CR:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: openstack-data-plane 1

1 The OpenStackDataPlaneDeployment CR name must be unique, must consist of lower case alphanumeric characters, - (hyphen) or . (period), and must start and end with an alphanumeric character. Update the name in this example to a name that reflects the node sets in the deployment.

In the list of services, replace nova with nova-custom-sriov, nova-custom-ovsdpdk, or both:

  spec:
    services:
      - bootstrap
      - download-cache
      - reboot-os
      - configure-ovs-dpdk
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn
      - neutron-ovn-igmp
      - neutron-metadata
      - neutron-sriov
      - libvirt
      - nova-custom-sriov
      - nova-custom-ovsdpdk
      - telemetry
    nodeSets:

Add all the OpenStackDataPlaneNodeSet CRs that you want to deploy. Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment:

  spec:
    nodeSets:
      - openstack-data-plane
      - <nodeSet_name>
      - ...
      - <nodeSet_name>
    services:
      ...

Save the openstack_data_plane_deploy.yaml deployment file.
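Assembled from the snippets above, a complete openstack_data_plane_deploy.yaml might look like the following. The node set name and the service list are illustrative and must match the node sets and the services defined in your environment:

  apiVersion: dataplane.openstack.org/v1beta1
  kind: OpenStackDataPlaneDeployment
  metadata:
    name: openstack-data-plane
    namespace: openstack
  spec:
    # node sets to run this Ansible execution against
    nodeSets:
      - openstack-data-plane
    # ordered list of data plane services to execute
    services:
      - bootstrap
      - download-cache
      - configure-network
      - validate-network
      - install-os
      - configure-os
      - ssh-known-hosts
      - run-os
      - install-certs
      - ovn
      - neutron-metadata
      - libvirt
      - nova-custom-ovsdpdk
      - telemetry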
Deploy the data plane:

  $ oc create -f openstack_data_plane_deploy.yaml -n openstack

You can view the Ansible logs while the deployment executes:

  $ oc get pod -l app=openstackansibleee -w
  $ oc logs -l app=openstackansibleee -f --max-log-requests 10

Confirm that the data plane is deployed:

  $ oc get openstackdataplanedeployment -n openstack

Sample output

  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     Setup Complete

Repeat the oc get command until you see the NodeSet Ready message:

  $ oc get openstackdataplanenodeset -n openstack

Sample output

  NAME                   STATUS   MESSAGE
  openstack-data-plane   True     NodeSet Ready

For information about the meaning of the returned status, see Data plane conditions and states. If the status indicates that the data plane has not been deployed, then troubleshoot the deployment. For information, see Troubleshooting the data plane creation and deployment.

Map the Compute nodes to the Compute cell that they are connected to:

  $ oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose

If you did not create additional cells, this command maps the Compute nodes to cell1.

Verification

Access the remote shell for the openstackclient pod and confirm that the deployed Compute nodes are visible on the control plane:

  $ oc rsh -n openstack openstackclient
  $ openstack hypervisor list
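As an alternative to polling with repeated oc get commands, you can block until a resource reports Ready. This is a sketch that relies on standard oc wait condition matching against the status conditions described in Data plane conditions and states; the resource names are the ones used in this example:

  # wait up to 30 minutes for the deployment and the node set to become Ready
  $ oc wait openstackdataplanedeployment/openstack-data-plane --for=condition=Ready --timeout=30m -n openstack
  $ oc wait openstackdataplanenodeset/openstack-data-plane --for=condition=Ready --timeout=30m -n openstack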
10.11. Data plane conditions and states

Each data plane resource has a series of conditions within its status subresource that indicates the overall state of the resource, including its deployment progress.

For an OpenStackDataPlaneNodeSet, until an OpenStackDataPlaneDeployment has been started and finished successfully, the Ready condition is False. When the deployment succeeds, the Ready condition is set to True. A subsequent deployment sets the Ready condition to False until the deployment succeeds, when the Ready condition is set to True again.

Table 10.14. OpenStackDataPlaneNodeSet CR conditions

Ready: "True": The OpenStackDataPlaneNodeSet CR is successfully deployed. "False": The deployment is not yet requested or has failed, or there are other failed conditions.
SetupReady: "True": All setup tasks for a resource are complete. Setup tasks include verifying the SSH key secret, verifying other fields on the resource, and creating the Ansible inventory for each resource. Each service-specific condition is set to "True" when that service completes deployment. You can check the service conditions to see which services have completed their deployment, or which services failed.
DeploymentReady: "True": The NodeSet has been successfully deployed.
InputReady: "True": The required inputs are available and ready.
NodeSetDNSDataReady: "True": DNSData resources are ready.
NodeSetIPReservationReady: "True": The IPSet resources are ready.
NodeSetBaremetalProvisionReady: "True": Bare-metal nodes are provisioned and ready.

Table 10.15. OpenStackDataPlaneNodeSet status fields

Deployed: "True": The OpenStackDataPlaneNodeSet CR is successfully deployed. "False": The deployment is not yet requested or has failed, or there are other failed conditions.
DNSClusterAddresses
CtlplaneSearchDomain

Table 10.16. OpenStackDataPlaneDeployment CR conditions

Ready: "True": The data plane is successfully deployed. "False": The data plane deployment failed, or there are other failed conditions.
DeploymentReady: "True": The data plane is successfully deployed.
InputReady: "True": The required inputs are available and ready.
<NodeSet> Deployment Ready: "True": The deployment has succeeded for the named NodeSet, indicating all services for the NodeSet have succeeded.
<NodeSet> <Service> Deployment Ready: "True": The deployment has succeeded for the named NodeSet and Service. Each <NodeSet> <Service> Deployment Ready specific condition is set to "True" as that service completes successfully for the named NodeSet. Once all services are complete for a NodeSet, the <NodeSet> Deployment Ready condition is set to "True". The service conditions indicate which services have completed their deployment, or which services failed and for which NodeSets.

Table 10.17. OpenStackDataPlaneDeployment status fields

Deployed: "True": The data plane is successfully deployed. All Services for all NodeSets have succeeded. "False": The deployment is not yet requested or has failed, or there are other failed conditions.

Table 10.18. OpenStackDataPlaneService CR conditions

Ready: "True": The service has been created and is ready for use. "False": The service has failed to be created.

10.12. Troubleshooting data plane creation and deployment

To troubleshoot a deployment when services are not deploying or operating correctly, you can check the job condition message for the service, and you can check the logs for a node set.

10.12.1. Checking the job condition message for a service

Each data plane deployment in the environment has associated services. Each of these services has a job condition message that matches the current status of the AnsibleEE job executing for that service. You can use this information to troubleshoot deployments when services are not deploying or operating correctly.

Procedure

Determine the name and status of all deployments:

  $ oc get openstackdataplanedeployment

The following example output shows a deployment currently in progress:

  NAME           NODESETS                  STATUS   MESSAGE
  edpm-compute   ["openstack-edpm-ipam"]   False    Deployment in progress

Retrieve and inspect Ansible execution jobs. The Kubernetes jobs are labelled with the name of the OpenStackDataPlaneDeployment. You can list the jobs for each OpenStackDataPlaneDeployment by using the label:

  $ oc get job -l openstackdataplanedeployment=edpm-compute
  NAME                                                 STATUS     COMPLETIONS   DURATION   AGE
  bootstrap-edpm-compute-openstack-edpm-ipam           Complete   1/1           78s        25h
  configure-network-edpm-compute-openstack-edpm-ipam   Complete   1/1           37s        25h
  configure-os-edpm-compute-openstack-edpm-ipam        Complete   1/1           66s        25h
  download-cache-edpm-compute-openstack-edpm-ipam      Complete   1/1           64s        25h
  install-certs-edpm-compute-openstack-edpm-ipam       Complete   1/1           46s        25h
  install-os-edpm-compute-openstack-edpm-ipam          Complete   1/1           57s        25h
  libvirt-edpm-compute-openstack-edpm-ipam             Complete   1/1           2m37s      25h
  neutron-metadata-edpm-compute-openstack-edpm-ipam    Complete   1/1           61s        25h
  nova-edpm-compute-openstack-edpm-ipam                Complete   1/1           3m20s      25h
  ovn-edpm-compute-openstack-edpm-ipam                 Complete   1/1           78s        25h
  run-os-edpm-compute-openstack-edpm-ipam              Complete   1/1           33s        25h
  ssh-known-hosts-edpm-compute                         Complete   1/1           19s        25h
  telemetry-edpm-compute-openstack-edpm-ipam           Complete   1/1           2m5s       25h
  validate-network-edpm-compute-openstack-edpm-ipam    Complete   1/1           16s        25h

You can check logs by using oc logs -f job/<job-name>. For example, to check the logs from the configure-network job:

  $ oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2
  PLAY RECAP *********************************************************************
  edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0

10.12.1.1. Job condition messages

AnsibleEE jobs have an associated condition message that indicates the current state of the service job. This condition message is displayed in the MESSAGE field of the oc get job <job_name> command output. Jobs return one of the following conditions when queried:

Job not started: The job has not started.
Job not found: The job could not be found.
Job is running: The job is currently running.
Job complete: The job execution is complete.
Job error occurred <error_message>: The job stopped executing unexpectedly. The <error_message> is replaced with a specific error message.

To further investigate a service that is displaying a particular job condition message, view its logs by using the command oc logs job/<service>. For example, to view the logs for the repo-setup-openstack-edpm service, use the command oc logs job/repo-setup-openstack-edpm.

10.12.2. Checking the logs for a node set

You can access the logs for a node set to check for deployment issues.

Procedure

Retrieve the pods with the OpenStackAnsibleEE label:

  $ oc get pods -l app=openstackansibleee
  configure-network-edpm-compute-j6r4l   0/1   Completed           0   3m36s
  validate-network-edpm-compute-6g7n9    0/1   Pending             0   0s
  validate-network-edpm-compute-6g7n9    0/1   ContainerCreating   0   11s
  validate-network-edpm-compute-6g7n9    1/1   Running             0   13s

Access the pod you want to check.

For a pod that is running:

  $ oc rsh validate-network-edpm-compute-6g7n9

For a pod that is not running:

  $ oc debug configure-network-edpm-compute-j6r4l

List the directories in the /runner/artifacts mount:

  $ ls /runner/artifacts
  configure-network-edpm-compute  validate-network-edpm-compute

View the stdout for the required artifact:

  $ cat /runner/artifacts/configure-network-edpm-compute/stdout
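If you want to keep a local copy of the artifacts for offline inspection, the standard oc cp command can copy them out of a running pod. This is a sketch; the pod name is the hypothetical one used in the example above:

  # copy the Ansible artifacts from the pod to the current directory
  $ oc cp validate-network-edpm-compute-6g7n9:/runner/artifacts ./artifacts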
[ "ssh-keygen -f <key_file_name> -N \"\" -t rsa -b 4096", "oc create secret generic dataplane-ansible-ssh-private-key-secret --save-config --dry-run=client --from-file=ssh-privatekey=<key_file_name> --from-file=ssh-publickey=<key_file_name>.pub [--from-file=authorized_keys=<key_file_name>.pub] -n openstack -o yaml | oc apply -f -", "ssh-keygen -f ./nova-migration-ssh-key -t ecdsa-sha2-nistp521 -N ''", "oc create secret generic nova-migration-ssh-key --save-config --from-file=ssh-privatekey=nova-migration-ssh-key --from-file=ssh-publickey=nova-migration-ssh-key.pub -n openstack -o yaml | oc apply -f -", "apiVersion: v1 kind: Secret metadata: name: subscription-manager namespace: openstack data: username: <base64_username> password: <base64_password>", "echo -n <string> | base64", "oc create -f secret_subscription.yaml -n openstack", "oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{\"registry.redhat.io\": {\"<username>\": \"<password>\"}}'", "apiVersion: v1 kind: Secret metadata: name: libvirt-secret namespace: openstack type: Opaque data: LibvirtPassword: <base64_password>", "echo -n <password> | base64", "oc apply -f secret_libvirt.yaml -n openstack", "oc describe secret dataplane-ansible-ssh-private-key-secret oc describe secret nova-migration-ssh-key oc describe secret subscription-manager oc describe secret redhat-registry oc describe secret libvirt-secret", "--- apiVersion: v1 kind: ConfigMap metadata: name: cpu-pinning-nova data: 25-cpu-pinning-nova.conf: | [DEFAULT] reserved_host_memory_mb = 4096 [compute] cpu_shared_set = 0-3,24-27 cpu_dedicated_set = 8-23,32-47 [neutron] physnets = <network_name1>, <network_name2> [neutron_physnet_ <network_name1> ] numa_nodes = <number> [neutron_physnet_ <network_name2> ] numa_nodes = <number> [neutron_tunnel] numa_nodes = <number> --- apiVersion: v1 kind: ConfigMap metadata: name: sriov-nova data: 26-sriov-nova.conf: | [libvirt] cpu_power_management=false [pci] passthrough_whitelist = {\"address\": \"0000:05:00.2\" , \"physical_network\": \"sriov-1\" , \"trusted\":\"true\"} passthrough_whitelist = {\"address\": \"0000:05:00.3\" , \"physical_network\": \"sriov-2\" , \"trusted\":\"true\"} ---", "oc create -f sriov-pinning-passthru.yaml -n openstack", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriov", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriov spec: label: dataplane-deployment-nova-custom-sriov dataSources: - configMapRef: name: cpu-pinning-nova - configMapRef: name: sriov-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-sriov spec: label: dataplane-deployment-nova-custom-sriov edpmServiceType: nova dataSources: - configMapRef: name: cpu-pinning-nova - configMapRef: name: sriov-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key playbook: osp.edpm.nova tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle", "oc apply -f nova-custom-sriov.yaml -n openstack", "oc get openstackdataplaneservice nova-custom-sriov -o yaml -n openstack", "--- apiVersion: v1 kind: ConfigMap metadata: name: 
cpu-pinning-nova data: 25-cpu-pinning-nova.conf: | [DEFAULT] reserved_host_memory_mb = 4096 [compute] cpu_shared_set = 0-3,24-27 cpu_dedicated_set = 8-23,32-47 [neutron] physnets = <network_name1>, <network_name2> [neutron_physnet_ <network_name1> ] numa_nodes = <number> [neutron_physnet_ <network_name2> ] numa_nodes = <number> [neutron_tunnel] numa_nodes = <number> ---", "oc create -f dpdk-pinning.yaml -n openstack", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-ovsdpdk", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-ovsdpdk spec: label: dataplane-deployment-nova-custom-ovsdpdk edpmServiceType: nova dataSources: - configMapRef: name: cpu-pinning-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-ovsdpdk spec: label: dataplane-deployment-nova-custom-ovsdpdk edpmServiceType: nova dataSources: - configMapRef: name: cpu-pinning-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key playbook: osp.edpm.nova tlsCerts: default: contents: - dnsnames - ips networks: - ctlplane issuer: osp-rootca-issuer-internal caCerts: combined-ca-bundle", "oc apply -f nova-custom-ovsdpdk.yaml -n openstack", "oc get openstackdataplaneservice nova-custom-ovsdpdk -o yaml -n openstack", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-data-plane 1 namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\"", "preProvisioned: true", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key> extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "apiVersion: v1 kind: Secret metadata: name: subscription-manager data: username: <base64_encoded_username> password: <base64_encoded_password>", "oc create secret generic redhat-registry --from-literal edpm_container_registry_logins='{\"registry.redhat.io\": {\"<username>\": \"<password>\"}}'", "nodeTemplate: ansible: ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18-beta-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms", "nodes: edpm-compute-0: 1 hostName: edpm-compute-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: 
edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "oc create -f openstack_preprovisioned_node_set.yaml -n openstack", "oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE openstack-data-plane False Deployment not started", "oc get secret | grep openstack-data-plane dataplanenodeset-openstack-data-plane Opaque 1 3m50s", "oc get openstackdataplaneservice -n openstack NAME AGE configure-network 6d7h configure-os 6d6h install-os 6d6h run-os 6d6h validate-network 6d6h ovn 6d6h libvirt 6d6h nova 6d6h telemetry 6d6h", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-data-plane namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane preProvisioned: true nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\" managementNetwork: ctlplane ansible: ansibleUser: cloud-admin ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: [] edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: 
true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.100 - name: storage subnetName: subnet1 fixedIP: 172.18.0.100 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.100 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 fixedIP: 172.17.0.101 - name: storage subnetName: subnet1 fixedIP: 172.18.0.101 - name: tenant subnetName: subnet1 fixedIP: 172.19.0.101 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-data-plane 1 namespace: openstack spec: tlsEnabled: true env: - name: ANSIBLE_FORCE_COLOR value: \"True\"", "preProvisioned: false baremetalSetTemplate: deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret bmhNamespace: <bmh_namespace> cloudUserName: <ansible_ssh_user> bmhLabelSelector: app: <bmh_label> ctlplaneInterface: <interface> dnsSearchDomains: - osptest.openstack.org", "oc patch provisioning provisioning-configuration --type merge -p '{\"spec\":{\"watchAllNamespaces\": true }}'", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key>", "nodeTemplate: ansibleSSHPrivateKeySecret: <secret-key> extraMounts: - extraVolType: Logs volumes: - name: ansible-logs persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\"", "nodes: edpm-compute-0: 1 hostName: edpm-compute-0 networks: 2 - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 3 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: 4 fqdn_internal_api: edpm-compute-0.example.com edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com", "oc create -f openstack_unprovisioned_node_set.yaml -n openstack", "oc get openstackdataplanenodeset -n openstack NAME STATUS MESSAGE openstack-data-plane False Deployment not started", "oc get secret -n openstack | grep openstack-data-plane dataplanenodeset-openstack-data-plane Opaque 1 3m50s", "oc get openstackdataplaneservice -n openstack NAME AGE configure-network 6d7h configure-os 6d6h install-os 6d6h run-os 6d6h validate-network 6d6h ovn 6d6h libvirt 6d6h nova 6d6h telemetry 6d6h", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-data-plane namespace: openstack spec: env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - ctlplane preProvisioned: false baremetalSetTemplate: deploymentSSHSecret: dataplane-ansible-ssh-private-key-secret bmhNamespace: openshift-machine-api cloudUserName: cloud-admin bmhLabelSelector: app: openstack ctlplaneInterface: enp1s0 dnsSearchDomains: - osptest.openstack.org nodeTemplate: ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret extraMounts: - extraVolType: Logs volumes: - name: ansible-logs 
persistentVolumeClaim: claimName: <pvc_name> mounts: - name: ansible-logs mountPath: \"/runner/artifacts\" managementNetwork: ctlplane ansible: ansibleUser: cloud-admin ansiblePort: 22 ansibleVarsFrom: - prefix: subscription_manager_ secretRef: name: subscription-manager - secretRef: name: redhat-registry ansibleVars: edpm_bootstrap_command: | subscription-manager register --username {{ subscription_manager_username }} --password {{ subscription_manager_password }} subscription-manager release --set=9.4 subscription-manager repos --disable=* subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms --enable=rhoso-18.0-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms edpm_bootstrap_release_version_package: [] edpm_network_config_os_net_config_mappings: edpm-compute-0: nic1: 52:54:04:60:55:22 edpm-compute-1: nic1: 52:54:04:60:55:22 neutron_physical_bridge_name: br-ex neutron_public_interface_name: eth0 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: {{ neutron_physical_bridge_name }} mtu: {{ min_viable_mtu }} use_dhcp: false dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: {{ ctlplane_host_routes }} members: - type: interface name: nic1 mtu: {{ min_viable_mtu }} # force the MAC address of the bridge to this interface primary: true {% for network in nodeset_networks %} - type: vlan mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }} vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }} addresses: - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }} routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }} {% endfor %} nodes: edpm-compute-0: hostName: edpm-compute-0 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.100 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.100 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-0.example.com bmhLabelSelector: nodeName: edpm-compute-0 edpm-compute-1: hostName: edpm-compute-1 networks: - name: ctlplane subnetName: subnet1 defaultRoute: true fixedIP: 192.168.122.101 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: tenant subnetName: subnet1 ansible: ansibleHost: 192.168.122.101 ansibleUser: cloud-admin ansibleVars: fqdn_internal_api: edpm-compute-1.example.com bmhLabelSelector: nodeName: edpm-compute-1", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic2", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: vlan device: 
nic{{ loop.index + 1 }} mtu: {{ lookup( vars , networks_lower[network] ~ _mtu ) }} vlan_id: {{ lookup( vars , networks_lower[network] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[network] ~ _ip ) }}/{{ lookup( vars , networks_lower[network] ~ _cidr ) }} routes: {{ lookup( vars , networks_lower[network] ~ _host_routes ) }}", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-bond dns_servers: {{ ctlplane_dns_nameservers }} domain: {{ dns_search_domains }} members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bound_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: ovs_bond name: bond1 mtu: {{ min_viable_mtu }} ovs_options: {{ bond_interface_ovs_options }} members: - type: interface name: nic2 mtu: {{ min_viable_mtu }} primary: true - type: interface name: nic3 mtu: {{ min_viable_mtu }}", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: members: - type: ovs_user_bridge name: br-dpdk0 members: - type: ovs_dpdk_bond name: dpdkbond0 rx_queue: {{ num_dpdk_interface_rx_queues }} members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic4 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic5", "ovs-vsctl set port <bond port> other_config:lb-output-action=true", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 mtu: {{ min_viable_mtu }} bonding_options: \"mode=802.3ad lacp_rate=fast updelay=1000 miimon=100 xmit_hash_policy=layer3+4\" members: type: interface name: ens1f0 mtu: {{ min_viable_mtu }} primary: true type: interface name: ens1f1 mtu: {{ min_viable_mtu }}", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond1 members: - type: interface name: nic2 - type: interface name: nic3 bonding_options: \"mode=802.3ad lacp_rate=[fast|slow] updelay=1000 miimon=100\"", ". 
edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: linux_bond name: bond_api bonding_options: \"mode=active-backup\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: nic3 primary: true - type: interface name: nic4 - type: vlan vlan_id: get_param: InternalApiNetworkVlanID device: bond_api addresses: - ip_netmask: get_param: InternalApiIpSubnet", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant use_dhcp: false mtu: 9000 members: - type: linux_bond name: bond_tenant bonding_options: \"mode=802.3ad updelay=1000 miimon=100\" use_dhcp: false dns_servers: get_param: DnsServers members: - type: interface name: p1p1 primary: true - type: interface name: p1p2 - type: vlan device: bond_tenant vlan_id: {get_param: TenantNetworkVlanID} addresses: - ip_netmask: {get_param: TenantIpSubnet}", "edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: ovs_bridge name: br-tenant routes: {{ [ctlplane_host_routes] | flatten | unique }}", "apiVersion: v1 data: 25-igmp.conf: | [ovs] igmp_snooping_enable = True kind: ConfigMap metadata: name: neutron-igmp namespace: openstack --- apiVersion: v1 data: 25-cpu-pinning-nova.conf: | [DEFAULT] reserved_host_memory_mb = 4096 [compute] cpu_shared_set = \"0,20,1,21\" cpu_dedicated_set = \"8-19,28-39\" [neutron] physnets = dpdkdata1 [neutron_physnet_dpdkdata1] numa_nodes = 1 [libvirt] cpu_power_management=false kind: ConfigMap metadata: name: ovs-dpdk-sriov-cpu-pinning-nova namespace: openstack --- apiVersion: v1 data: 03-sriov-nova.conf: | [pci] device_spec = {\"address\": \"0000:05:00.2\", \"physical_network\":\"sriov-1\", \"trusted\":\"true\"} device_spec = {\"address\": \"0000:05:00.3\", \"physical_network\":\"sriov-2\", \"trusted\":\"true\"} device_spec = {\"address\": \"0000:07:00.0\", \"physical_network\":\"sriov-3\", \"trusted\":\"true\"} device_spec = {\"address\": \"0000:07:00.1\", \"physical_network\":\"sriov-4\", \"trusted\":\"true\"} kind: ConfigMap metadata: name: sriov-nova namespace: openstack --- apiVersion: v1 data: NodeRootPassword: cmVkaGF0Cg== kind: Secret metadata: name: baremetalset-password-secret namespace: openstack type: Opaque --- apiVersion: v1 data: authorized_keys: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFBVFdweE5LNlNYTEo0dnh2Y0F4N0t4c3FLenI0a3pEalRpT0dQa3pyZWZnTjdVcmo2RUZPUXlBRWk5cXNnYkRVYXp0MktpdzJqc3djbG5TYW1zUDE0V2x3RkN2a1NuU1o4cTZwWGJTbGpNa3Z1R3FiVXZoSTVxTVlMTDNlRWpyU21nNDlWcTBWZkdFQmxIWUx6TGFncVBlN1FKR0NCMGlWTVk5b3N0TFdPM1NKbXVuZz09IGNpZm13X3JlcHJvZHVjZXJfa2V5Cg== ssh-privatekey: 
LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFyQUFBQUJObFkyUnpZUwoxemFHRXlMVzVwYzNSd05USXhBQUFBQ0c1cGMzUndOVEl4QUFBQWhRUUFFMXFjVFN1a2x5eWVMOGIzQU1leXNiS2lzNitKCk13NDA0amhqNU02M240RGUxSzQraEJUa01nQkl2YXJJR3cxR3M3ZGlvc05vN01ISlowbXByRDllRnBjQlFyNUVwMG1mS3UKcVYyMHBZekpMN2hxbTFMNFNPYWpHQ3k5M2hJNjBwb09QVmF0Rlh4aEFaUjJDOHkyb0tqM3UwQ1JnZ2RJbFRHUGFMTFMxagp0MGlacnA0QUFBRVl0cGNtdHJhWEpyWUFBQUFUWldOa2MyRXRjMmhoTWkxdWFYTjBjRFV5TVFBQUFBaHVhWE4wY0RVeU1RCkFBQUlVRUFCTmFuRTBycEpjc25pL0c5d0RIc3JHeW9yT3ZpVE1PTk9JNFkrVE90NStBM3RTdVBvUVU1RElBU0wycXlCc04KUnJPM1lxTERhT3pCeVdkSnFhdy9YaGFYQVVLK1JLZEpueXJxbGR0S1dNeVMrNGFwdFMrRWptb3hnc3ZkNFNPdEthRGoxVwpyUlY4WVFHVWRndk10cUNvOTd0QWtZSUhTSlV4ajJpeTB0WTdkSW1hNmVBQUFBUWdHTWZobWFSblZFcnhjZ2Z6aVRpdzFnClBjYXBBV21TMHh5dDNyclhoSnExd0pRMys3ZFp0Y3l0alg5VVVuNnh0NlE1M0JTT1ZvaWR2L2pZK2krYytNVVhUZ0FBQUIKUmphV1p0ZDE5eVpYQnliMlIxWTJWeVgydGxlUUVDQXdRRkJnPT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCg== ssh-publickey: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFBVFdweE5LNlNYTEo0dnh2Y0F4N0t4c3FLenI0a3pEalRpT0dQa3pyZWZnTjdVcmo2RUZPUXlBRWk5cXNnYkRVYXp0MktpdzJqc3djbG5TYW1zUDE0V2x3RkN2a1NuU1o4cTZwWGJTbGpNa3Z1R3FiVXZoSTVxTVlMTDNlRWpyU21nNDlWcTBWZkdFQmxIWUx6TGFncVBlN1FKR0NCMGlWTVk5b3N0TFdPM1NKbXVuZz09IGNpZm13X3JlcHJvZHVjZXJfa2V5Cg== kind: Secret metadata: name: dataplane-ansible-ssh-private-key-secret namespace: openstack type: Opaque --- apiVersion: v1 data: LibvirtPassword: MTIzNDU2Nzg= kind: Secret metadata: name: libvirt-secret namespace: openstack type: Opaque --- apiVersion: v1 data: ssh-privatekey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUFyQUFBQUJObFkyUnpZUwoxemFHRXlMVzVwYzNSd05USXhBQUFBQ0c1cGMzUndOVEl4QUFBQWhRUUFwWTlSRzV5a2pLR3p2c295dWlDZm1zakEwZkFYCmkvS0hQT3R3Zm9NZjRQZXpRSFFNOHFJZ0pGc0svaVlwNVJIWmNVQlcwVVBCNnBpazQ1L3k0QVF4bmVBQWRrN0JQbTc0dG8KSkxoVjY2U3pzV2pHR1NFdzVXVFBwVUVpaXdQMlNiL1l4dXloNWlLbUJyTE5SRWpYTEJvbjJJZWRBbEJMaC9FaGpkdFZjUwo5ZzczQ0tvQUFBRVFoeS9PODRjdnp2TUFBQUFUWldOa2MyRXRjMmhoTWkxdWFYTjBjRFV5TVFBQUFBaHVhWE4wY0RVeU1RCkFBQUlVRUFLV1BVUnVjcEl5aHM3N0tNcm9nbjVySXdOSHdGNHZ5aHp6cmNINkRIK0QzczBCMERQS2lJQ1JiQ3Y0bUtlVVIKMlhGQVZ0RkR3ZXFZcE9PZjh1QUVNWjNnQUhaT3dUNXUrTGFDUzRWZXVrczdGb3hoa2hNT1ZrejZWQklvc0Q5a20vMk1icwpvZVlpcGdheXpVUkkxeXdhSjlpSG5RSlFTNGZ4SVkzYlZYRXZZTzl3aXFBQUFBUWdEQ0lEdHFqZ0JNam8rbG1rRnhzV3NvCkxKOGxBSWF0a0ZTdDkxcGJHWWIrVFRnS0NSOGhqbXdjalNoRzFlNlRaZWZNTkc5TklzVlRYYjNjTkYvaThJTHV1UUFBQUEKNXViM1poSUcxcFozSmhkR2x2YmdFQ0F3UT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCg== ssh-publickey: ZWNkc2Etc2hhMi1uaXN0cDUyMSBBQUFBRTJWalpITmhMWE5vWVRJdGJtbHpkSEExTWpFQUFBQUlibWx6ZEhBMU1qRUFBQUNGQkFDbGoxRWJuS1NNb2JPK3lqSzZJSitheU1EUjhCZUw4b2M4NjNCK2d4L2c5N05BZEF6eW9pQWtXd3IrSmlubEVkbHhRRmJSUThIcW1LVGpuL0xnQkRHZDRBQjJUc0UrYnZpMmdrdUZYcnBMT3hhTVlaSVREbFpNK2xRU0tMQS9aSnY5akc3S0htSXFZR3NzMUVTTmNzR2lmWWg1MENVRXVIOFNHTjIxVnhMMkR2Y0lxZz09IG5vdmEgbWlncmF0aW9uCg== kind: Secret metadata: name: nova-migration-ssh-key namespace: openstack type: kubernetes.io/ssh-auth --- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneNodeSet metadata: name: openstack-edpm namespace: openstack spec: baremetalSetTemplate: bmhLabelSelector: app: openstack cloudUserName: cloud-admin ctlplaneInterface: enp130s0f0 passwordSecret: name: baremetalset-password-secret namespace: openstack provisioningInterface: enp5s0 env: - name: ANSIBLE_FORCE_COLOR value: \"True\" networkAttachments: - 
ctlplane nodeTemplate: ansible: ansiblePort: 22 ansibleUser: cloud-admin ansibleVars: dns_search_domains: [] edpm_bootstrap_command: |- # root CA cd /etc/pki/ca-trust/source/anchors/ curl -LOk https://certs.corp.redhat.com/RH-IT-Root-CA.crt curl -LOk https://certs.corp.redhat.com/certs/2022-IT-Root-CA.pem update-ca-trust # install rhos-release repos dnf --nogpgcheck --repofrompath=rhos-release,http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/ --repo=rhos-release install -y rhos-release rhos-release ceph-7.1-rhel-9 -r 9.4 # Issue #2 - edpm_bootstrap fails if we don't update container-selinux dnf update -y rpm -ivh --nosignature http://download.devel.redhat.com/rcm-guest/puddles/OpenStack/rhos-release/rhos-release-latest.noarch.rpm rhos-release ceph-7.1-rhel-9 -r 9.4 curl -o /etc/yum.repos.d/delorean.repo https://osp-trunk.hosted.upshift.rdu2.redhat.com/rhel9-osp18/current-podified/delorean.repo echo \"[osptrunk-candidate-deps]\" >> \"/etc/yum.repos.d/osptrunk-candidate-deps.repo\" echo \"name=osptrunk-candidate-deps\" >> \"/etc/yum.repos.d/osptrunk-candidate-deps.repo\" echo \"baseurl=http://download.eng.bos.redhat.com/brewroot/repos/rhos-18.0-rhel-9-trunk-candidate/latest/x86_64/\" >> \"/etc/yum.repos.d/osptrunk-candidate-deps.repo\" echo \"gpgcheck=0\" >> /etc/yum.repos.d/osptrunk-candidate-deps.repo echo \"enabled=1\" >> /etc/yum.repos.d/osptrunk-candidate-deps.repo echo \"priority=1\" >> /etc/yum.repos.d/osptrunk-candidate-deps.repo # sets up rhoso release repo echo \"[rhoso-18.0-rhel-9-nightly-compose]\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"name=rhoso-18.0-rhel-9-nightly-compose\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"baseurl=http://download.hosts.prod.upshift.rdu2.redhat.com/rhel-9/nightly/RHOSO/RHOSO-18.0-trunk/latest-RHOSO_TRUNK-18-RHEL-9/compose/OpenStack/x86_64/os/\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"gpgcheck=0\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"enabled=1\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"priority=1\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo echo \"includepkgs=rhoso-release-18*\" >> /etc/yum.repos.d/rhosotrunk-compose-deps.repo edpm_fips_mode: check edpm_kernel_args: default_hugepagesz=1GB hugepagesz=1G hugepages=64 iommu=pt intel_iommu=on tsx=off isolcpus=2-19,22-39 edpm_network_config_hide_sensitive_logs: false edpm_network_config_os_net_config_mappings: edpm-compute-0: 1 dmiString: system-product-name id: PowerEdge R730 nic1: eno1 nic2: eno2 nic3: enp130s0f0 nic4: enp130s0f1 nic5: enp130s0f2 nic6: enp130s0f3 nic7: enp5s0f0 nic8: enp5s0f1 nic9: enp5s0f2 nic10: enp5s0f3 nic11: enp7s0f0np0 nic12: enp7s0f1np1 edpm-compute-1: 2 dmiString: system-product-name id: PowerEdge R730 nic1: eno1 nic2: eno2 nic3: enp130s0f0 nic4: enp130s0f1 nic5: enp130s0f2 nic6: enp130s0f3 nic7: enp5s0f0 nic8: enp5s0f1 nic9: enp5s0f2 nic10: enp5s0f3 nic11: enp7s0f0np0 nic12: enp7s0f1np1 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: false - type: interface name: nic2 use_dhcp: false - type: linux_bond 3 name: bond_api use_dhcp: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} members: - type: interface name: nic3 primary: true addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} 
routes: - default: true next_hop: {{ ctlplane_gateway_ip }} - type: vlan 4 vlan_id: {{ lookup( vars , networks_lower[ internalapi ] ~ _vlan_id ) }} device: bond_api addresses: - ip_netmask: {{ lookup( vars , networks_lower[ internalapi ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ internalapi ] ~ _cidr ) }} - type: vlan 5 vlan_id: {{ lookup( vars , networks_lower[ storage ] ~ _vlan_id ) }} device: bond_api addresses: - ip_netmask: {{ lookup( vars , networks_lower[ storage ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ storage ] ~ _cidr ) }} - type: ovs_user_bridge 6 name: br-link0 use_dhcp: false ovs_extra: \"set port br-link0 tag={{ lookup( vars , networks_lower[ tenant ] ~ _vlan_id ) }}\" addresses: - ip_netmask: {{ lookup( vars , networks_lower[ tenant ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ tenant ] ~ _cidr ) }} mtu: {{ lookup( vars , networks_lower[ tenant ] ~ _mtu ) }} members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 2 ovs_extra: \"set port dpdkbond0 bond_mode=balance-slb\" members: - type: ovs_dpdk_port name: dpdk0 members: - type: interface name: nic7 - type: ovs_dpdk_port name: dpdk1 members: - type: interface name: nic8 - type: ovs_user_bridge name: br-dpdk0 mtu: 9000 use_dhcp: false members: - type: ovs_dpdk_bond name: dpdkbond1 mtu: 9000 rx_queue: 3 ovs_options: \"bond_mode=balance-tcp lacp=active other_config:lacp-time=fast other-config:lacp-fallback-ab=true other_config:lb-output-action=true\" members: - type: ovs_dpdk_port name: dpdk2 members: - type: interface name: nic5 - type: ovs_dpdk_port name: dpdk3 members: - type: interface name: nic6 - type: ovs_user_bridge name: br-dpdk1 mtu: 9000 use_dhcp: false members: - type: ovs_dpdk_port name: dpdk4 mtu: 9000 rx_queue: 3 members: - type: interface name: nic4 - type: sriov_pf 7 name: nic9 numvfs: 10 8 mtu: 9000 use_dhcp: false promisc: true - type: sriov_pf name: nic10 numvfs: 10 mtu: 9000 use_dhcp: false promisc: true - type: sriov_pf 9 name: nic11 numvfs: 5 10 mtu: 9000 use_dhcp: false promisc: true - type: sriov_pf 11 name: nic12 numvfs: 5 12 mtu: 9000 use_dhcp: false promisc: true edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings: sriov-1:enp5s0f2,sriov-2:enp5s0f3,sriov-3:enp7s0f0np0,sriov-4:enp7s0f1np1 edpm_nodes_validation_validate_controllers_icmp: false edpm_nodes_validation_validate_gateway_icmp: false edpm_nova_libvirt_qemu_group: hugetlbfs edpm_ovn_bridge_mappings: - dpdkmgmt:br-link0 - dpdkdata0:br-dpdk0 - dpdkdata1:br-dpdk1 edpm_ovs_dpdk_lcore_list: 0,20,1,21 edpm_ovs_dpdk_memory_channels: \"4\" edpm_ovs_dpdk_pmd_auto_lb: \"true\" edpm_ovs_dpdk_pmd_core_list: 2,3,4,5,6,7,22,23,24,25,26,27 edpm_ovs_dpdk_pmd_improvement_threshold: \"25\" edpm_ovs_dpdk_pmd_load_threshold: \"70\" edpm_ovs_dpdk_pmd_rebal_interval: \"2\" edpm_ovs_dpdk_socket_memory: 4096,4096 edpm_ovs_dpdk_vhost_postcopy_support: \"true\" edpm_selinux_mode: enforcing edpm_sshd_allowed_ranges: - 192.168.122.0/24 edpm_sshd_configure_firewall: true edpm_tuned_isolated_cores: 2-19,22-39 edpm_tuned_profile: cpu-partitioning-powersave enable_debug: false gather_facts: false neutron_physical_bridge_name: br-access neutron_public_interface_name: nic1 service_net_map: nova_api_network: internalapi nova_libvirt_network: internalapi timesync_ntp_servers: - hostname: clock.redhat.com ansibleSSHPrivateKeySecret: dataplane-ansible-ssh-private-key-secret managementNetwork: ctlplane networks: - defaultRoute: true name: ctlplane subnetName: subnet1 - name: internalapi subnetName: subnet1 - name: storage subnetName: subnet1 - name: 
tenant subnetName: subnet1 nodes: edpm-compute-0: hostName: compute-0 edpm-compute-1: hostName: compute-1 preProvisioned: false services: - bootstrap - download-cache - reboot-os - configure-ovs-dpdk - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - install-certs - ovn - neutron-ovn-igmp - neutron-metadata - neutron-sriov - libvirt - nova-custom-ovsdpdksriov - telemetry --- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: neutron-ovn-igmp namespace: openstack spec: caCerts: combined-ca-bundle dataSources: - configMapRef: name: neutron-igmp - secretRef: name: neutron-ovn-agent-neutron-config edpmServiceType: neutron-ovn label: neutron-ovn-igmp playbook: osp.edpm.neutron_ovn tlsCerts: default: contents: - dnsnames - ips issuer: osp-rootca-issuer-ovn keyUsages: - digital signature - key encipherment - client auth networks: - ctlplane --- apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneService metadata: name: nova-custom-ovsdpdksriov namespace: openstack spec: caCerts: combined-ca-bundle dataSources: - configMapRef: name: ovs-dpdk-sriov-cpu-pinning-nova - configMapRef: name: sriov-nova - secretRef: name: nova-cell1-compute-config - secretRef: name: nova-migration-ssh-key edpmServiceType: nova label: nova-custom-ovsdpdksriov playbook: osp.edpm.nova tlsCerts: default: contents: - dnsnames - ips issuer: osp-rootca-issuer-internal networks: - ctlplane", "edpm_network_config_os_net_config_mappings: dellr750: dmiString: system-product-name id: PowerEdge R750 nic1: eno8303 nic2: ens1f0 nic3: ens1f1 nic4: ens1f2 nic5: ens1f3 nic6: ens2f0np0 nic7: ens2f1np1 edpm_network_config_template: | --- {% set mtu_list = [ctlplane_mtu] %} {% for network in nodeset_networks %} {{ mtu_list.append(lookup( vars , networks_lower[network] ~ _mtu )) }} {%- endfor %} {% set min_viable_mtu = mtu_list | max %} network_config: - type: interface name: nic1 use_dhcp: false - type: interface name: nic2 use_dhcp: false addresses: - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }} routes: - default: true next_hop: {{ ctlplane_gateway_ip }} - type: sriov_pf name: nic3 mtu: 9000 numvfs: 5 use_dhcp: false defroute: false nm_controlled: true hotplug: true - type: sriov_pf name: nic4 mtu: 9000 numvfs: 5 use_dhcp: false defroute: false nm_controlled: true hotplug: true - type: linux_bond name: bond_api use_dhcp: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} members: - type: sriov_vf device: nic3 vfid: 0 vlan_id: {{ lookup( vars , networks_lower[ internalapi ] ~ _vlan_id ) }} - type: sriov_vf device: nic4 vfid: 0 vlan_id: {{ lookup( vars , networks_lower[ internalapi ] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[ internalapi ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ internalapi ] ~ _cidr ) }} - type: linux_bond name: storage_bond use_dhcp: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} members: - type: sriov_vf device: nic3 vfid: 1 vlan_id: {{ lookup( vars , networks_lower[ storage ] ~ _vlan_id ) }} - type: sriov_vf device: nic4 vfid: 1 vlan_id: {{ lookup( vars , networks_lower[ storage ] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[ storage ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ storage ] ~ _cidr ) }} - type: linux_bond name: mgmtst_bond use_dhcp: false bonding_options: \"mode=active-backup\" dns_servers: {{ ctlplane_dns_nameservers }} members: - type: sriov_vf device: 
nic3 vfid: 2 vlan_id: {{ lookup( vars , networks_lower[ storagemgmt ] ~ _vlan_id ) }} - type: sriov_vf device: nic4 vfid: 2 vlan_id: {{ lookup( vars , networks_lower[ storagemgmt ] ~ _vlan_id ) }} addresses: - ip_netmask: {{ lookup( vars , networks_lower[ storagemgmt ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ storagemgmt ] ~ _cidr ) }} - type: ovs_user_bridge name: br-link0 use_dhcp: false mtu: 9000 ovs_extra: \"set port br-link0 tag={{ lookup( vars , networks_lower[ tenant ] ~ _vlan_id ) }}\" addresses: - ip_netmask: {{ lookup( vars , networks_lower[ tenant ] ~ _ip ) }}/{{ lookup( vars , networks_lower[ tenant ] ~ _cidr ) }} members: - type: ovs_dpdk_bond name: dpdkbond0 mtu: 9000 rx_queue: 1 members: - type: ovs_dpdk_port name: dpdk0 members: - type: sriov_vf device: nic3 vfid: 3 - type: ovs_dpdk_port name: dpdk1 members: - type: sriov_vf device: nic4 vfid: 3 - type: ovs_user_bridge name: br-dpdk0 use_dhcp: false mtu: 9000 rx_queue: 1 members: - type: ovs_dpdk_port name: dpdk2 members: - type: interface name: nic5 - type: sriov_pf name: nic6 mtu: 9000 numvfs: 5 use_dhcp: false defroute: false - type: sriov_pf name: nic7 mtu: 9000 numvfs: 5 use_dhcp: false defroute: false", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-data-plane 1", "spec: services: - bootstrap - download-cache - reboot-os - configure-ovs-dpdk - configure-network - validate-network - install-os - configure-os - ssh-known-hosts - run-os - install-certs - ovn - neutron-ovn-igmp - neutron-metadata - neutron-sriov - libvirt - nova-custom-sriov - nova-custom-ovsdpdk - telemetry nodeSets:", "spec: nodeSets: - openstack-data-plane - <nodeSet_name> - - <nodeSet_name> services:", "oc create -f openstack_data_plane_deploy.yaml -n openstack", "oc get pod -l app=openstackansibleee -w oc logs -l app=openstackansibleee -f --max-log-requests 10", "oc get openstackdataplanedeployment -n openstack", "NAME STATUS MESSAGE openstack-data-plane True Setup Complete", "oc get openstackdataplanenodeset -n openstack", "NAME STATUS MESSAGE openstack-data-plane True NodeSet Ready", "oc rsh nova-cell0-conductor-0 nova-manage cell_v2 discover_hosts --verbose", "oc rsh -n openstack openstackclient openstack hypervisor list", "oc get openstackdataplanedeployment", "oc get openstackdataplanedeployment NAME NODESETS STATUS MESSAGE edpm-compute [\"openstack-edpm-ipam\"] False Deployment in progress", "oc get job -l openstackdataplanedeployment=edpm-compute NAME STATUS COMPLETIONS DURATION AGE bootstrap-edpm-compute-openstack-edpm-ipam Complete 1/1 78s 25h configure-network-edpm-compute-openstack-edpm-ipam Complete 1/1 37s 25h configure-os-edpm-compute-openstack-edpm-ipam Complete 1/1 66s 25h download-cache-edpm-compute-openstack-edpm-ipam Complete 1/1 64s 25h install-certs-edpm-compute-openstack-edpm-ipam Complete 1/1 46s 25h install-os-edpm-compute-openstack-edpm-ipam Complete 1/1 57s 25h libvirt-edpm-compute-openstack-edpm-ipam Complete 1/1 2m37s 25h neutron-metadata-edpm-compute-openstack-edpm-ipam Complete 1/1 61s 25h nova-edpm-compute-openstack-edpm-ipam Complete 1/1 3m20s 25h ovn-edpm-compute-openstack-edpm-ipam Complete 1/1 78s 25h run-os-edpm-compute-openstack-edpm-ipam Complete 1/1 33s 25h ssh-known-hosts-edpm-compute Complete 1/1 19s 25h telemetry-edpm-compute-openstack-edpm-ipam Complete 1/1 2m5s 25h validate-network-edpm-compute-openstack-edpm-ipam Complete 1/1 16s 25h", "oc logs -f jobs/configure-network-edpm-compute-openstack-edpm-ipam | tail -n2 PLAY RECAP 
********************************************************************* edpm-compute-0 : ok=22 changed=0 unreachable=0 failed=0 skipped=17 rescued=0 ignored=0", "oc get pods -l app=openstackansibleee configure-network-edpm-compute-j6r4l 0/1 Completed 0 3m36s validate-network-edpm-compute-6g7n9 0/1 Pending 0 0s validate-network-edpm-compute-6g7n9 0/1 ContainerCreating 0 11s validate-network-edpm-compute-6g7n9 1/1 Running 0 13s", "oc rsh validate-network-edpm-compute-6g7n9", "oc debug configure-network-edpm-compute-j6r4l", "ls /runner/artifacts configure-network-edpm-compute validate-network-edpm-compute", "cat /runner/artifacts/configure-network-edpm-compute/stdout" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/assembly_create-data-plane-sriov-dpdk_rhoso-nfv
Chapter 19. Using the mount Command
Chapter 19. Using the mount Command On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (for example, CDs, DVDs, or USB flash drives) can be attached to a certain point (the mount point ) in the directory tree, and then detached again. To attach or detach a file system, use the mount or umount command respectively. This chapter describes the basic use of these commands, as well as some advanced topics, such as moving a mount point or creating shared subtrees. 19.1. Listing Currently Mounted File Systems To display all currently attached file systems, use the following command with no additional arguments: This command displays the list of known mount points. Each line provides important information about the device name, the file system type, the directory in which it is mounted, and relevant mount options in the following form: device on directory type type ( options ) The findmnt utility, which allows users to list mounted file systems in a tree-like form, has also been available since Red Hat Enterprise Linux 6.1. To display all currently attached file systems, run the findmnt command with no additional arguments: 19.1.1. Specifying the File System Type By default, the output of the mount command includes various virtual file systems such as sysfs and tmpfs . To display only the devices with a certain file system type, provide the -t option: Similarly, to display only the devices with a certain file system type using the findmnt command: For a list of common file system types, see Table 19.1, "Common File System Types" . For example usage, see Example 19.1, "Listing Currently Mounted ext4 File Systems" . Example 19.1. Listing Currently Mounted ext4 File Systems Usually, both / and /boot partitions are formatted to use ext4 . To display only the mount points that use this file system, use the following command: To list such mount points using the findmnt command, type:
[ "mount", "findmnt", "mount -t type", "findmnt -t type", "mount -t ext4 /dev/sda2 on / type ext4 (rw) /dev/sda1 on /boot type ext4 (rw)", "findmnt -t ext4 TARGET SOURCE FSTYPE OPTIONS / /dev/sda2 ext4 rw,realtime,seclabel,barrier=1,data=ordered /boot /dev/sda1 ext4 rw,realtime,seclabel,barrier=1,data=ordered" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-mount-command
OpenShift sandboxed containers
OpenShift sandboxed containers OpenShift Container Platform 4.14 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/openshift_sandboxed_containers/index
2.5. OSA-Express5s Cards Support in qethqoat
2.5. OSA-Express5s Cards Support in qethqoat Support for OSA-Express5s cards has been added to the qethqoat tool, part of the s390utils package. This enhancement extends the serviceability of network and card setups for OSA-Express5s cards, and is included as a Technology Preview with Red Hat Enterprise Linux 7.1 on IBM System z.
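For orientation, qethqoat queries the OSA address table of a given network interface. A hypothetical invocation against an interface named eth0 (the interface name is an assumption, not taken from this release note) might look like:

qethqoat eth0

See the qethqoat(8) man page shipped with s390utils for the exact options available on your release.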
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/TP-sect-Hardware_Enablement-OSA-Express5s
1.3.6. Committing Changes
1.3.6. Committing Changes To share your changes with others and commit them to a CVS repository, change to the directory containing your working copy and run the following command: cvs commit [ -m " commit message " ] Note that unless you specify the commit message on the command line, CVS opens an external text editor ( vi by default) for you to write it. For information on how to determine which editor to start, see Section 1.3.1, "Installing and Configuring CVS" . Example 1.22. Committing changes to a CVS repository Imagine that the directory with your working copy of a CVS repository has the following contents: In this working copy, ChangeLog is scheduled for addition to the CVS repository, Makefile is already under revision control and contains local changes, and the TODO file, which is also under revision control, has been scheduled for removal and is no longer present in the working copy. To commit these changes to the CVS repository, type:
[ "project]USD ls AUTHORS ChangeLog CVS doc INSTALL LICENSE Makefile README src", "project]USD cvs commit -m \"Updated the makefile.\" cvs commit: Examining . cvs commit: Examining doc RCS file: /home/john/cvsroot/project/ChangeLog,v done Checking in ChangeLog; /home/john/cvsroot/project/ChangeLog,v <-- ChangeLog initial revision: 1.1 done Checking in Makefile; /home/john/cvsroot/project/Makefile,v <-- Makefile new revision: 1.2; previous revision: 1.1 done Removing TODO; /home/john/cvsroot/project/TODO,v <-- TODO new revision: delete; previous revision: 1.1.1.1 done" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-cvs-commit
Chapter 3. Logging 6.1
Chapter 3. Logging 6.1 3.1. Logging 6.1 3.1.1. Logging 6.1.1 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1 . 3.1.1.1. New Features and Enhancements With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6420 ) 3.1.1.2. Bug Fixes Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.] . With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes , is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes , is 262144 bytes. ( LOG-6379 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6383 ) Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. ( LOG-6405 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. ( LOG-6407 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6449 ) Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack . With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. ( LOG-6469 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. ( LOG-6484 ) Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. ( LOG-6498 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6533 ) 3.1.1.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 3.1.2. Logging 6.1.0 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0 . 3.1.2.1. New Features and Enhancements 3.1.2.1.1. Log Collection This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. ( LOG-5292 ) With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. 
Users should adjust resource limits based on their cluster's specific needs and specifications. ( LOG-6072 ) With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. ( LOG-6355 ) 3.1.2.1.2. Log Storage With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). ( LOG-5939 ) 3.1.2.2. Technology Preview Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding . With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq . For information about data mapping see OTLP Specification . 3.1.2.3. Bug Fixes None. 3.1.2.4. CVEs CVE-2024-6119 CVE-2024-6232 3.2. Logging 6.1 context: logging-6x-6.1 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 3.2.1. Inputs and outputs Inputs specify the sources of logs to be forwarded. Logging provides built-in input types: application , receiver , infrastructure , and audit , which select logs from different parts of your cluster. You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 3.2.2. Receiver input type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . The ReceiverSpec defines the configuration for a receiver input. 3.2.3. Pipelines and filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. Filters can be used to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 3.2.4. Operator behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: When set to Managed (default), the operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the operator does not take any action, allowing you to manually manage the logging components. 3.2.5. 
Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 3.2.6. Quick start OpenShift Logging supports two data models: ViaQ (General Availability) OpenTelemetry (Technology Preview) You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder . ViaQ is the default data model when forwarding logs to LokiStack. Note In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. 3.2.6.1. Quick start with ViaQ To use the default ViaQ data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. 
Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Note The dataModel field is optional and left unset ( dataModel: "" ) by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console. 3.2.6.2. Quick start with OpenTelemetry Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. 
Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp 1 Use the annotation to enable the Otel data model, which is a Technology Preview feature. 2 Define the output type as lokiStack . 3 Specifies the OpenTelemetry data model. Note You cannot use lokiStack.labelKeys when dataModel is Otel . To achieve similar functionality when dataModel is Otel , refer to "Configuring LokiStack for OTLP data ingestion". Verification Verify that OTLP is functioning correctly by going to Observe OpenShift Logging LokiStack Writes in the OpenShift web console, and checking Distributor - Structured Metadata . 3.3. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 3.3.1. Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Setup log collection by binding the required cluster roles to your service account. 3.3.1.1. 
Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 3.3.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 3.3.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 3.3.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 Annotations <1> rules: Specifies the permissions granted by this ClusterRole. <2> apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. <3> loki.grafana.com: The API group for managing Loki-related resources. <4> resources: The resource type that the ClusterRole grants permission to interact with. <5> application: Refers to the application resources within the Loki logging system. <6> resourceNames: Specifies the names of resources that this role can manage. <7> logs: Refers to the log resources that can be created. <8> verbs: The actions allowed on the resources. <9> create: Grants permission to create new logs in the Loki system. 3.3.1.2.3. 
Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new audit logs. 3.3.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 3.3.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders. 7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 3.3.2.
Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , and off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 3.3.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 3.3.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded. Built-in input types application , infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 3.3.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the following namespaces: default kube openshift Containing the kube- or openshift- prefix audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 3.3.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. elasticsearch Forwards logs to an external Elasticsearch instance. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend. lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy otlp Forwards logs using the OpenTelemetry Protocol. splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 3.3.5. Configuring OTLP output Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. 
Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp 1 Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. 2 This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. Note The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. 3.3.5.1. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. 3.3.5.2. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 3.3.5.3. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . 
Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 3.3.5.3.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 3.3.5.4. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. 
Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 3.3.5.5. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. 
The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 3.3.5.6. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... 
spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.3.5.7. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Note The filters exempts the log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.3.6. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. 
The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.3.7. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of namespaces to ignore when collecting the logs. 4 Specifies the set of containers to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 3.4. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. 3.4.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. Important It is not possible to change the number 1x for the deployment size. Table 3.1. 
Loki sizing

| | 1x.demo | 1x.pico [6.1+ only] | 1x.extra-small | 1x.small | 1x.medium |
| --- | --- | --- | --- | --- | --- |
| Data transfer | Demo use only | 50GB/day | 100GB/day | 500GB/day | 2TB/day |
| Queries per second (QPS) | Demo use only | 1-25 QPS at 200ms | 1-25 QPS at 200ms | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor | None | 2 | 2 | 2 | 2 |
| Total CPU requests | None | 7 vCPUs | 14 vCPUs | 34 vCPUs | 54 vCPUs |
| Total CPU requests if using the ruler | None | 8 vCPUs | 16 vCPUs | 42 vCPUs | 70 vCPUs |
| Total memory requests | None | 17Gi | 31Gi | 67Gi | 139Gi |
| Total memory requests if using the ruler | None | 18Gi | 35Gi | 83Gi | 171Gi |
| Total disk requests | 40Gi | 590Gi | 430Gi | 430Gi | 590Gi |
| Total disk requests if using the ruler | 80Gi | 910Gi | 750Gi | 750Gi | 910Gi |

3.4.2. Prerequisites You have installed the Loki Operator by using the CLI or web console. You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 3.4.3. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 3.4.4. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain the necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack:

| Rule name | Description |
| --- | --- |
| alertingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. |
| alertingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. |
| alertingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete AlertingRule resources. |
| alertingrules.loki.grafana.com-v1-view | Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. |
| recordingrules.loki.grafana.com-v1-admin | Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. |
| recordingrules.loki.grafana.com-v1-crdview | Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. |
| recordingrules.loki.grafana.com-v1-edit | Users with this role have permission to create, update, and delete RecordingRule resources. |
| recordingrules.loki.grafana.com-v1-view | Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing recording rules but cannot make any modifications to them. |

3.4.4.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username.
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> 3.4.5. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. Table 3.2. AlertingRule definitions Tenant type Valid namespaces for AlertingRule CRs application <your_application_namespace> audit openshift-logging infrastructure openshift-* , kube-* , default Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory.
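Because callout 1 requires the namespace to carry a label that matches the LokiStack spec.rules.namespaceSelector definition, it can help to label the namespace before applying the CR. The following command is a sketch, assuming the selector expects the openshift.io/<label_name> label used in the example above; substitute the label name that your LokiStack actually selects on:
USD oc label namespace openshift-operators-redhat openshift.io/<label_name>="true"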
Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 3.4.6. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 3.4.7. Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream. Enable stream-based retention on a per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml 3.4.8. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ...
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 3.4.8.1. Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 3.4.8.2. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. 
This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 3.4.8.3. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 3.4.8.4. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 3.4.8.5. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 3.4.8.6. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. 
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 3.4.8.7. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 3.4.8.7.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 3.4.8.8. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true Example Fluentd error message 2023-08-30 14:52:15 +0000 [warn]: [default_loki_infra] failed to flush the buffer. 
retry_times=2 next_retry_time=2023-08-30 14:52:19 +0000 chunk="604251225bf5378ed1567231a1c03b8b" error_class=Fluent::Plugin::LokiOutput::LogPostError error="429 Too Many Requests Ingestion rate limit exceeded for user infrastructure (limit: 4194304 bytes/sec) while attempting to ingest '4082' lines totaling '7820025' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased\n" The error is also visible on the receiving end. For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 3.5. OTLP data ingestion in Loki Logging 6.1 enables an API endpoint using the OpenTelemetry Protocol (OTLP). As OTLP is a standardized format not specifically designed for Loki, it requires additional configuration on Loki's side to map OpenTelemetry's data format to Loki's data model. OTLP lacks concepts such as stream labels or structured metadata . Instead, OTLP provides metadata about log entries as attributes , grouped into three categories: Resource Scope Log This allows metadata to be set for multiple entries simultaneously or individually as needed. 3.5.1. Configuring LokiStack for OTLP data ingestion Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: Prerequisites Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. Procedure Set the schema version: When creating a new LokiStack CR, set version: v13 in the storage schema configuration. Note For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). 
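For an existing configuration, the upgrade is additive. The following is a minimal sketch, assuming the deployment currently uses schema v12; the dates are placeholders, and the v13 effectiveDate must still be in the future when you apply the change:
# ...
spec:
  storage:
    schemas:
    - effectiveDate: "2022-06-01"
      version: v12
    - effectiveDate: "2024-10-25"
      version: v13
Loki continues to read older data with the previous schema and starts writing with v13 once the effective date passes.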
Configure the storage schema as follows: Example configure storage schema # ... spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25 Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. 3.5.2. Attribute mapping When the Loki Operator is set to openshift-logging mode, it automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with Loki's stream labels and structured metadata. For typical setups, these default mappings should be sufficient. However, you might need to customize attribute mapping in the following cases: Using a custom Collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. Important Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. 3.5.2.1. Custom attribute mapping for OpenShift When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift defaults, but custom mappings can be configured to adjust these. Custom mappings allow further configurations to meet specific needs. In openshift-logging mode, custom attribute mappings can be configured globally for all tenants or for individual tenants as needed. When custom mappings are defined, they are appended to the OpenShift defaults. If default recommended labels are not required, they can be disabled in the tenant configuration. Note A major difference between the Loki Operator and Loki itself lies in inheritance handling. Loki only copies default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in openshift-logging mode. Within LokiStack , attribute mapping configuration is managed through the limits setting: # ... spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2 1 Global OTLP attribute configuration. 2 OTLP attribute configuration for the application tenant within openshift-logging mode. Note Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: spec: limits: global: otlp: streamLabels: resourceAttributes: - name: "k8s.namespace.name" - name: "k8s.pod.name" - name: "k8s.container.name" Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: # ... spec: limits: global: otlp: streamLabels: # ... structuredMetadata: resourceAttributes: - name: "process.command_line" - name: "k8s\\.pod\\.labels\\..+" regex: true scopeAttributes: - name: "service.name" logAttributes: - name: "http.route" Tip Use regular expressions by setting regex: true for attributes names when mapping similar attributes in Loki. Important Avoid using regular expressions for stream labels, as this can increase data volume. 3.5.2.2. Customizing OpenShift defaults In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. 
Other attributes, labeled recommended , might be disabled if performance is impacted. When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations. 3.5.2.3. Removing recommended attributes To reduce default attributes in openshift-logging mode, disable recommended attributes: # ... spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1 1 Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes . Note This option is beneficial if the default attributes cause performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries. 3.5.3. Additional resources Loki labels Structured metadata OpenTelemetry attribute 3.6. OpenTelemetry data model This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.6.1. Forwarding and ingestion protocol Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using the OTLP Specification . OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources. 3.6.2. Semantic conventions The log collector in this solution gathers the following log streams: Container logs Cluster node journal logs Cluster node auditd logs Kubernetes and OpenShift API server logs OpenShift Virtual Network (OVN) logs You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name , cluster_id , pod_name , namespace , and possibly deployment or app_name . These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data. In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage. The following sections define the attributes that are generally forwarded. 3.6.2.1.
Log entry structure All log streams include the following log data fields: The Applicable Sources column indicates which log sources each field applies to: all : This field is present in all logs. container : This field is present in Kubernetes container logs, both application and infrastructure. audit : This field is present in Kubernetes, OpenShift API, and OVN logs. auditd : This field is present in node auditd logs. journal : This field is present in node journal logs. Name Applicable Sources Comment body all observedTimeUnixNano all timeUnixNano all severityText container, journal attributes all (Optional) Present when forwarding stream specific attributes 3.6.2.2. Attributes Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table. The Location column specifies the type of attribute: resource : Indicates a resource attribute scope : Indicates a scope attribute log : Indicates a log attribute The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored: stream label : Enables efficient filtering and querying based on specific labels. Can be labeled as required if the Loki Operator enforces this attribute in the configuration. structured metadata : Allows for detailed filtering and storage of key-value pairs. Enables users to use direct labels for streamlined queries without requiring JSON parsing. With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries. Name Location Applicable Sources Storage (LokiStack) Comment log_source resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.source log_type resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.type kubernetes.container_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.container.name kubernetes.host resource all stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.node.name kubernetes.namespace_name resource container required stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.namespace.name kubernetes.pod_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.pod.name openshift.cluster_id resource all (DEPRECATED) Compatibility attribute, contains same information as openshift.cluster.uid level log container, journal (DEPRECATED) Compatibility attribute, contains same information as severityText openshift.cluster.uid resource all required stream label openshift.log.source resource all required stream label openshift.log.type resource all required stream label openshift.labels.* resource all structured metadata k8s.node.name resource all stream label k8s.namespace.name resource container required stream label k8s.container.name resource container stream label k8s.pod.labels.* resource container structured metadata k8s.pod.name resource container stream label k8s.pod.uid resource container structured metadata k8s.cronjob.name resource container stream label Conditionally forwarded based on creator of pod k8s.daemonset.name resource container stream label Conditionally forwarded based on creator of pod k8s.deployment.name resource container stream label Conditionally forwarded based on creator of pod 
k8s.job.name resource container stream label Conditionally forwarded based on creator of pod k8s.replicaset.name resource container structured metadata Conditionally forwarded based on creator of pod k8s.statefulset.name resource container stream label Conditionally forwarded based on creator of pod log.iostream log container structured metadata k8s.audit.event.level log audit structured metadata k8s.audit.event.stage log audit structured metadata k8s.audit.event.user_agent log audit structured metadata k8s.audit.event.request.uri log audit structured metadata k8s.audit.event.response.code log audit structured metadata k8s.audit.event.annotation.* log audit structured metadata k8s.audit.event.object_ref.resource log audit structured metadata k8s.audit.event.object_ref.name log audit structured metadata k8s.audit.event.object_ref.namespace log audit structured metadata k8s.audit.event.object_ref.api_group log audit structured metadata k8s.audit.event.object_ref.api_version log audit structured metadata k8s.user.username log audit structured metadata k8s.user.groups log audit structured metadata process.executable.name resource journal structured metadata process.executable.path resource journal structured metadata process.command_line resource journal structured metadata process.pid resource journal structured metadata service.name resource journal stream label systemd.t.* log journal structured metadata systemd.u.* log journal structured metadata Note Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases. Loki changes the attribute names when persisting them to storage. The names will be lowercased, and all characters in the set: ( . , / , - ) will be replaced by underscores ( _ ). For example, k8s.namespace.name will become k8s_namespace_name . 3.6.3. Additional resources Semantic Conventions Logs Data Model General Logs Attributes 3.7. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
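For reference, after you install the Cluster Observability Operator, enabling the visualization described above comes down to creating a small UIPlugin resource. The following is a minimal sketch, assuming a LokiStack named logging-loki in the openshift-logging namespace:
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki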
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/logging/logging-6-1
2.8. RHEA-2011:0658 - new package: icedtea-web
2.8. RHEA-2011:0658 - new package: icedtea-web New icedtea-web packages are now available for Red Hat Enterprise Linux 6. The IcedTea-Web project provides a Java web browser plug-in and an implementation of Java Web Start, which is based on the NetX project. It also contains a preview version of a configuration tool for managing deployment settings for the plug-in and Web Start implementations. This enhancement update adds the icedtea-web packages to Red Hat Enterprise Linux 6. (BZ# 664063 ) All users who require icedtea-web are advised to install these new packages.
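For illustration only, the package name comes from this advisory and the rest is standard Yum usage: on a registered Red Hat Enterprise Linux 6 system, you can install the new packages by typing the following at a shell prompt as root : yum install icedtea-web You can then confirm the installed version with rpm -q icedtea-web .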
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_technical_notes/icedtea-web_new
Troubleshooting Central
Troubleshooting Central Red Hat Advanced Cluster Security for Kubernetes 4.7 Troubleshooting Central Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/troubleshooting_central/index
Chapter 4. AHC Websocket Component
Chapter 4. AHC Websocket Component Available as of Camel version 2.14 The ahc-ws component provides Websocket based endpoints for a client communicating with external servers over Websocket (as a client opening a websocket connection to an external server). The component uses the AHC component that in turn uses the Async Http Client library. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ahc-ws</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 4.1. URI Format ahc-ws://hostname[:port][/resourceUri][?options] ahc-wss://hostname[:port][/resourceUri][?options] By default, port 80 is used for ahc-ws and port 443 for ahc-wss. 4.2. AHC-WS Options As the AHC-WS component is based on the AHC component, you can use the various configuration options of the AHC component. The AHC Websocket component supports 8 options, which are listed below. Name Description Default Type client (advanced) To use a custom AsyncHttpClient. AsyncHttpClient binding (advanced) To use a custom AhcBinding which allows you to control how to bind between AHC and Camel. AhcBinding clientConfig (advanced) To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. AsyncHttpClientConfig sslContextParameters (security) Reference to a org.apache.camel.util.jsse.SSLContextParameters in the Registry. Note that configuring this option will override any SSL/TLS configuration options provided through the clientConfig option at the endpoint or component level. SSLContextParameters allowJavaSerializedObject (advanced) Whether to allow Java serialization when a request uses content-type=application/x-java-serialized-object This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. HeaderFilterStrategy resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AHC Websocket endpoint is configured using URI syntax: ahc-ws:httpUri with the following path and query parameters: 4.2.1. Path Parameters (1 parameter): Name Description Default Type httpUri Required The URI to use such as http://hostname:port/path URI 4.2.2. Query Parameters (18 parameters): Name Description Default Type bridgeEndpoint (common) If the option is true, then the Exchange.HTTP_URI header is ignored, and the endpoint's URI is used for the request. You may also set the throwExceptionOnFailure option to false to let the AhcProducer send all fault responses back. false boolean bufferSize (common) The initial in-memory buffer size used when transferring data between Camel and AHC Client. 4096 int headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter headers to and from the Camel message. HeaderFilterStrategy throwExceptionOnFailure (common) Option to disable throwing the AhcOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code.
true boolean transferException (common) If enabled, and an Exchange failed processing on the consumer side, and the caused Exception was sent back serialized in the response as an application/x-java-serialized-object content type (for example using Jetty or Servlet Camel components), then on the producer side the exception will be deserialized and thrown as is, instead of the AhcOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean sendMessageOnError (consumer) Whether to send a message if the web-socket listener receives an error. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern connectionClose (producer) Define if the Connection Close header has to be added to the HTTP request. This parameter is false by default. false boolean cookieHandler (producer) Configure a cookie handler to maintain an HTTP session. CookieHandler useStreaming (producer) To enable streaming to send data as multiple text fragments. false boolean binding (advanced) To use a custom AhcBinding which allows you to control how to bind between AHC and Camel. AhcBinding clientConfig (advanced) To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. AsyncHttpClientConfig clientConfigOptions (advanced) To configure the AsyncHttpClientConfig using the key/values from the Map. Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean clientConfigRealmOptions (security) To configure the AsyncHttpClientConfig Realm using the key/values from the Map. Map sslContextParameters (security) Reference to a org.apache.camel.util.jsse.SSLContextParameters in the Registry. This reference overrides any configured SSLContextParameters at the component level. See Using the JSSE Configuration Utility. Note that configuring this option will override any SSL/TLS configuration options provided through the clientConfig option at the endpoint or component level. SSLContextParameters 4.3. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.component.ahc-ws.allow-java-serialized-object Whether to allow Java serialization when a request uses content-type=application/x-java-serialized-object This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk.
false Boolean camel.component.ahc-ws.binding To use a custom AhcBinding which allows you to control how to bind between AHC and Camel. The option is a org.apache.camel.component.ahc.AhcBinding type. String camel.component.ahc-ws.client To use a custom AsyncHttpClient. The option is a org.asynchttpclient.AsyncHttpClient type. String camel.component.ahc-ws.client-config To configure the AsyncHttpClient to use a custom com.ning.http.client.AsyncHttpClientConfig instance. The option is a org.asynchttpclient.AsyncHttpClientConfig type. String camel.component.ahc-ws.enabled Enable the ahc-ws component true Boolean camel.component.ahc-ws.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers to and from the Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.ahc-ws.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.ahc-ws.ssl-context-parameters Reference to a org.apache.camel.util.jsse.SSLContextParameters in the Registry. Note that configuring this option will override any SSL/TLS configuration options provided through the clientConfig option at the endpoint or component level. The option is a org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.ahc-ws.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 4.4. Writing and Reading Data over Websocket An ahc-ws endpoint can either write data to the socket or read from the socket, depending on whether the endpoint is configured as the producer or the consumer, respectively. 4.5. Configuring URI to Write or Read Data In the route below, Camel will write to the specified websocket connection. from("direct:start") .to("ahc-ws://targethost"); And the equivalent Spring sample: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="ahc-ws://targethost"/> </route> </camelContext> In the route below, Camel will read from the specified websocket connection. from("ahc-ws://targethost") .to("direct:next"); And the equivalent Spring sample: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="ahc-ws://targethost"/> <to uri="direct:next"/> </route> </camelContext> 4.6. See Also Configuring Camel Component Endpoint Getting Started AHC Atmosphere-Websocket
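As a brief sketch of the security options described above, a producer route can pick up TLS configuration from the registry through the sslContextParameters option. This assumes a org.apache.camel.util.jsse.SSLContextParameters bean registered under the hypothetical name sslConfig ; the targethost name is also a placeholder: from("direct:start") .to("ahc-wss://targethost?sslContextParameters=#sslConfig"); The #sslConfig lookup uses the standard Camel registry reference syntax; the option itself is documented in the tables above.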
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ahc-ws</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ahc-ws://hostname[:port][/resourceUri][?options] ahc-wss://hostname[:port][/resourceUri][?options]", "ahc-ws:httpUri", "from(\"direct:start\") .to(\"ahc-ws://targethost\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"ahc-ws://targethost\"/> </route> </camelContext>", "from(\"ahc-ws://targethost\") .to(\"direct:next\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"ahc-ws://targethost\"/> <to uri=\"direct:next\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ahc-ws-component
5.6. Starting haproxy
5.6. Starting haproxy To start the HAProxy service, enter the following command: systemctl start haproxy.service To make the HAProxy service persist through reboots, enter the following command: systemctl enable haproxy.service
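To verify the result, you can use standard systemd queries; these commands are generic systemd usage rather than part of the procedure above: systemctl status haproxy.service systemctl is-enabled haproxy.service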
[ "systemctl start haproxy.service", "systemctl enable haproxy.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/load_balancer_administration/s1-haproxy-setup-starting
Chapter 2. Using Self Node Remediation
Chapter 2. Using Self Node Remediation You can use the Self Node Remediation Operator to automatically reboot unhealthy nodes. This remediation strategy minimizes downtime for stateful applications and ReadWriteOnce (RWO) volumes, and restores compute capacity if transient failures occur. 2.1. About the Self Node Remediation Operator The Self Node Remediation Operator runs on the cluster nodes and reboots nodes that are identified as unhealthy. The Operator uses the MachineHealthCheck or NodeHealthCheck controller to detect the health of a node in the cluster. When a node is identified as unhealthy, the MachineHealthCheck or the NodeHealthCheck resource creates the SelfNodeRemediation custom resource (CR), which triggers the Self Node Remediation Operator. The SelfNodeRemediation CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediation metadata: name: selfnoderemediation-sample namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 1 status: lastError: <last_error_message> 2 1 Specifies the remediation strategy for the nodes. 2 Displays the last error that occurred during remediation. When remediation succeeds or if no errors occur, the field is left empty. The Self Node Remediation Operator minimizes downtime for stateful applications and restores compute capacity if transient failures occur. You can use this Operator regardless of the management interface used to provision a node, such as IPMI or an API, and regardless of the cluster installation type, such as installer-provisioned infrastructure or user-provisioned infrastructure. 2.1.1. About watchdog devices Watchdog devices can be any of the following: Independently powered hardware devices Hardware devices that share power with the hosts they control Virtual devices implemented in software, or softdog Hardware watchdog and softdog devices have electronic or software timers, respectively. These watchdog devices are used to ensure that the machine enters a safe state when an error condition is detected. The cluster is required to repeatedly reset the watchdog timer to prove that it is in a healthy state. This timer might elapse due to fault conditions, such as deadlocks, CPU starvation, and loss of network or disk access. If the timer expires, the watchdog device assumes that a fault has occurred and the device triggers a forced reset of the node. Hardware watchdog devices are more reliable than softdog devices. 2.1.1.1. Understanding Self Node Remediation Operator behavior with watchdog devices The Self Node Remediation Operator determines the remediation strategy based on the watchdog devices that are present. If a hardware watchdog device is configured and available, the Operator uses it for remediation. If a hardware watchdog device is not configured, the Operator enables and uses a softdog device for remediation. If neither watchdog device is supported, either by the system or by the configuration, the Operator remediates nodes by using software reboot. Additional resources Configuring a watchdog device for the virtual machine 2.2. Control plane fencing In earlier releases, you could enable Self Node Remediation and Node Health Check on worker nodes. In the event of node failure, you can now also follow remediation strategies on control plane nodes. Self Node Remediation occurs in two primary scenarios. API Server Connectivity In this scenario, the control plane node to be remediated is not isolated.
It can be directly connected to the API Server, or it can be indirectly connected to the API Server through worker nodes or control plane nodes that are directly connected to the API Server. When there is API Server Connectivity, the control plane node is remediated only if the Node Health Check Operator has created a SelfNodeRemediation custom resource (CR) for the node. No API Server Connectivity In this scenario, the control plane node to be remediated is isolated from the API Server. The node cannot connect directly or indirectly to the API Server. When there is no API Server Connectivity, the control plane node will be remediated as outlined in these steps: Check the status of the control plane node with the majority of the peer worker nodes. If the majority of the peer worker nodes cannot be reached, the node will be analyzed further. Self-diagnose the status of the control plane node If the self diagnostics pass, no action will be taken. If the self diagnostics fail, the node will be fenced and remediated. The self diagnostics currently supported are checking the kubelet service status, and checking endpoint availability using opt-in configuration. If the node cannot communicate with most of its worker peers, check the connectivity of the control plane node with other control plane nodes. If the node can communicate with any other control plane peer, no action will be taken. Otherwise, the node will be fenced and remediated. 2.3. Installing the Self Node Remediation Operator by using the web console You can use the Red Hat OpenShift web console to install the Self Node Remediation Operator. Note The Node Health Check Operator also installs the Self Node Remediation Operator as a default remediation provider. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, navigate to Operators OperatorHub . Select the Self Node Remediation Operator from the list of available Operators, and then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator is installed to the openshift-workload-availability namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-workload-availability namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs of the self-node-remediation-controller-manager pod and self-node-remediation-ds pods in the openshift-workload-availability project for any reported issues. 2.4. Installing the Self Node Remediation Operator by using the CLI You can use the OpenShift CLI ( oc ) to install the Self Node Remediation Operator. You can install the Self Node Remediation Operator in your own namespace or in the openshift-workload-availability namespace. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges.
Procedure Create a Namespace custom resource (CR) for the Self Node Remediation Operator: Define the Namespace CR and save the YAML file, for example, workload-availability-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability To create the Namespace CR, run the following command: $ oc create -f workload-availability-namespace.yaml Create an OperatorGroup CR: Define the OperatorGroup CR and save the YAML file, for example, workload-availability-operator-group.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability To create the OperatorGroup CR, run the following command: $ oc create -f workload-availability-operator-group.yaml Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, self-node-remediation-subscription.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: self-node-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable installPlanApproval: Manual 2 name: self-node-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: self-node-remediation 1 Specify the Namespace where you want to install the Self Node Remediation Operator. To install the Self Node Remediation Operator in the openshift-workload-availability namespace, specify openshift-workload-availability in the Subscription CR. 2 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. To create the Subscription CR, run the following command: $ oc create -f self-node-remediation-subscription.yaml Verify that the Self Node Remediation Operator created the SelfNodeRemediationTemplate CR: $ oc get selfnoderemediationtemplate -n openshift-workload-availability Example output self-node-remediation-automatic-strategy-template Verification Verify that the installation succeeded by inspecting the CSV resource: $ oc get csv -n openshift-workload-availability Example output NAME DISPLAY VERSION REPLACES PHASE self-node-remediation.v0.8.0 Self Node Remediation Operator v.0.8.0 self-node-remediation.v0.7.1 Succeeded Verify that the Self Node Remediation Operator is up and running: $ oc get deployment -n openshift-workload-availability Example output NAME READY UP-TO-DATE AVAILABLE AGE self-node-remediation-controller-manager 1/1 1 1 28h Verify that the Self Node Remediation Operator created the SelfNodeRemediationConfig CR: $ oc get selfnoderemediationconfig -n openshift-workload-availability Example output NAME AGE self-node-remediation-config 28h Verify that each self node remediation pod is scheduled and running on each worker node and control plane node: $ oc get daemonset -n openshift-workload-availability Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE self-node-remediation-ds 6 6 6 6 6 <none> 28h 2.5. Configuring the Self Node Remediation Operator The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR and the SelfNodeRemediationTemplate Custom Resource Definition (CRD).
Note To avoid unexpected reboots of a specific node, the Node Maintenance Operator places the node in maintenance mode and automatically adds a node selector that prevents the SNR daemonset from running on the specific node. 2.5.1. Understanding the Self Node Remediation Operator configuration The Self Node Remediation Operator creates the SelfNodeRemediationConfig CR with the name self-node-remediation-config . The CR is created in the namespace of the Self Node Remediation Operator. A change in the SelfNodeRemediationConfig CR re-creates the Self Node Remediation daemon set. The SelfNodeRemediationConfig CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationConfig metadata: name: self-node-remediation-config namespace: openshift-workload-availability spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /dev/watchdog 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10 hostPort: 30001 11 customDsTolerations: 12 - effect: NoSchedule key: node-role.kubernetes.io.infra operator: Equal value: "value1" tolerationSeconds: 3600 1 Specify an optional time duration that the Operator waits before recovering affected workloads running on an unhealthy node. Starting replacement pods while they are still running on the failed node can lead to data corruption and a violation of run-once semantics. The Operator calculates a minimum duration using the values in the ApiServerTimeout , ApiCheckInterval , MaxApiErrorThreshold , PeerDialTimeout , and PeerRequestTimeout fields, as well as the watchdog timeout and the cluster size at the time of remediation. To check the minimum duration calculation, view the manager pod logs and find references to the calculated minimum time in seconds . If you specify a value that is lower than the minimum duration, the Operator uses the minimum duration. However, if you want to increase the duration to a value higher than this minimum value, you can set safeTimeToAssumeNodeRebootedSeconds to a value higher than the minimum duration. 2 Specify the file path of the watchdog device in the nodes. If you enter an incorrect path to the watchdog device, the Self Node Remediation Operator automatically detects the softdog device path. If a watchdog device is unavailable, the SelfNodeRemediationConfig CR uses a software reboot. 3 Specify if you want to enable software reboot of the unhealthy nodes. By default, the value of isSoftwareRebootEnabled is set to true . To disable the software reboot, set the parameter value to false . 4 Specify the timeout duration to check connectivity with each API server. When this duration elapses, the Operator starts remediation. The timeout duration must be greater than or equal to 10 milliseconds. 5 Specify the frequency to check connectivity with each API server. The timeout duration must be greater than or equal to 1 second. 6 Specify a threshold value. After reaching this threshold, the node starts contacting its peers. The threshold value must be greater than or equal to 1 second. 7 Specify the duration of the timeout for the peer to connect the API server. The timeout duration must be greater than or equal to 10 milliseconds. 8 Specify the duration of the timeout for establishing connection with the peer. The timeout duration must be greater than or equal to 10 milliseconds. 
9 Specify the duration of the timeout to get a response from the peer. The timeout duration must be greater than or equal to 10 milliseconds. 10 Specify the frequency to update peer information such as IP address. The timeout duration must be greater than or equal to 10 seconds. 11 Specify an optional value to change the port that Self Node Remediation agents use for internal communication. The value must be greater than 0. The default value is port 30001. 12 Specify custom toleration Self Node Remediation agents that are running on the DaemonSets to support remediation for different types of nodes. You can configure the following fields: effect : The effect indicates the taint effect to match. If this field is empty, all taint effects are matched. When specified, allowed values are NoSchedule , PreferNoSchedule and NoExecute . key : The key is the taint key that the toleration applies to. If this field is empty, all taint keys are matched. If the key is empty, the operator field must be Exists . This combination means to match all values and all keys. operator : The operator represents a key's relationship to the value. Valid operators are Exists and Equal . The default is Equal . Exists is equivalent to a wildcard for a value, so that a pod can tolerate all taints of a particular category. value : The taint value the toleration matches to. If the operator is Exists , the value should be empty, otherwise it is just a regular string. tolerationSeconds : The period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (that is, do not evict). Zero and negative values will be treated as 0 (that is evict immediately) by the system. Custom toleration allows you to add a toleration to the Self Node Remediation agent pod. For more information, see Using tolerations to control OpenShift Logging pod placement . Note You can edit the self-node-remediation-config CR that is created by the Self Node Remediation Operator. However, when you try to create a new CR for the Self Node Remediation Operator, the following message is displayed in the logs: controllers.SelfNodeRemediationConfig ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config' or not in the namespace of the operator: 'openshift-workload-availability' {"selfnoderemediationconfig": "openshift-workload-availability/selfnoderemediationconfig-copy"} 2.5.2. Understanding the Self Node Remediation Template configuration The Self Node Remediation Operator also creates the SelfNodeRemediationTemplate Custom Resource Definition (CRD). This CRD defines the remediation strategy for the nodes. The following remediation strategies are available: Automatic This remediation strategy simplifies the remediation process by letting the Self Node Remediation Operator decide on the most suitable remediation strategy for the cluster. This strategy checks if the OutOfServiceTaint strategy is available on the cluster. If the OutOfServiceTaint strategy is available, the Operator selects the OutOfServiceTaint strategy. If the OutOfServiceTaint strategy is not available, the Operator selects the ResourceDeletion strategy. Automatic is the default remediation strategy. ResourceDeletion This remediation strategy removes the pods on the node, rather than the removal of the node object. This strategy recovers workloads faster. 
OutOfServiceTaint This remediation strategy implicitly causes the removal of the pods and associated volume attachments on the node, rather than the removal of the node object. It achieves this by placing the out-of-service taint on the node. This strategy recovers workloads faster. This strategy has been supported as a Technology Preview feature since OpenShift Container Platform version 4.13, and has been generally available since OpenShift Container Platform version 4.15. The Self Node Remediation Operator creates the SelfNodeRemediationTemplate CR for the strategy self-node-remediation-automatic-strategy-template , which the Automatic remediation strategy uses. The SelfNodeRemediationTemplate CR resembles the following YAML file: apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: creationTimestamp: "2022-03-02T08:02:40Z" name: self-node-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2 1 Specifies the type of remediation template based on the remediation strategy. Replace <remediation_object> with either resource or node ; for example, self-node-remediation-resource-deletion-template . 2 Specifies the remediation strategy. The default remediation strategy is Automatic . 2.5.3. Troubleshooting the Self Node Remediation Operator 2.5.3.1. General troubleshooting Issue You want to troubleshoot issues with the Self Node Remediation Operator. Resolution Check the Operator logs. 2.5.3.2. Checking the daemon set Issue The Self Node Remediation Operator is installed but the daemon set is not available. Resolution Check the Operator logs for errors or warnings. 2.5.3.3. Unsuccessful remediation Issue An unhealthy node was not remediated. Resolution Verify that the SelfNodeRemediation CR was created by running the following command: $ oc get snr -A If the MachineHealthCheck controller did not create the SelfNodeRemediation CR when the node turned unhealthy, check the logs of the MachineHealthCheck controller. Additionally, ensure that the MachineHealthCheck CR includes the required specification to use the remediation template. If the SelfNodeRemediation CR was created, ensure that its name matches the unhealthy node or the machine object. 2.5.3.4. Daemon set and other Self Node Remediation Operator resources exist even after uninstalling the Operator Issue The Self Node Remediation Operator resources, such as the daemon set, configuration CR, and the remediation template CR, exist even after uninstalling the Operator. Resolution To remove the Self Node Remediation Operator resources, delete the resources by running the following commands for each resource type: $ oc delete ds <self-node-remediation-ds> -n <namespace> $ oc delete snrc <self-node-remediation-config> -n <namespace> $ oc delete snrt <self-node-remediation-template> -n <namespace> 2.5.4. Gathering data about the Self Node Remediation Operator To collect debugging information about the Self Node Remediation Operator, use the must-gather tool. For information about the must-gather image for the Self Node Remediation Operator, see Gathering data about specific features . 2.5.5. Additional resources Using Operator Lifecycle Manager on restricted networks . Deleting Operators from a cluster
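For testing purposes, remediation can also be triggered by hand by creating a SelfNodeRemediation CR that follows the schema shown at the beginning of this chapter. A minimal sketch, assuming a hypothetical unhealthy worker node named worker-1 : $ oc apply -f - <<EOF apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediation metadata: name: worker-1 namespace: openshift-workload-availability spec: remediationStrategy: Automatic EOF In normal operation the CR is created by the MachineHealthCheck or NodeHealthCheck controller, and, as noted in the troubleshooting section above, its name must match the unhealthy node or machine object.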
[ "apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediation metadata: name: selfnoderemediation-sample namespace: openshift-workload-availability spec: remediationStrategy: <remediation_strategy> 1 status: lastError: <last_error_message> 2", "apiVersion: v1 kind: Namespace metadata: name: openshift-workload-availability", "oc create -f workload-availability-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: workload-availability-operator-group namespace: openshift-workload-availability", "oc create -f workload-availability-operator-group.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: self-node-remediation-operator namespace: openshift-workload-availability 1 spec: channel: stable installPlanApproval: Manual 2 name: self-node-remediation-operator source: redhat-operators sourceNamespace: openshift-marketplace package: self-node-remediation", "oc create -f self-node-remediation-subscription.yaml", "oc get selfnoderemediationtemplate -n openshift-workload-availability", "self-node-remediation-automatic-strategy-template", "oc get csv -n openshift-workload-availability", "NAME DISPLAY VERSION REPLACES PHASE self-node-remediation.v0.8.0 Self Node Remediation Operator v.0.8.0 self-node-remediation.v0.7.1 Succeeded", "oc get deployment -n openshift-workload-availability", "NAME READY UP-TO-DATE AVAILABLE AGE self-node-remediation-controller-manager 1/1 1 1 28h", "oc get selfnoderemediationconfig -n openshift-workload-availability", "NAME AGE self-node-remediation-config 28h", "oc get daemonset -n openshift-workload-availability", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE self-node-remediation-ds 6 6 6 6 6 <none> 28h", "apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationConfig metadata: name: self-node-remediation-config namespace: openshift-workload-availability spec: safeTimeToAssumeNodeRebootedSeconds: 180 1 watchdogFilePath: /dev/watchdog 2 isSoftwareRebootEnabled: true 3 apiServerTimeout: 15s 4 apiCheckInterval: 5s 5 maxApiErrorThreshold: 3 6 peerApiServerTimeout: 5s 7 peerDialTimeout: 5s 8 peerRequestTimeout: 5s 9 peerUpdateInterval: 15m 10 hostPort: 30001 11 customDsTolerations: 12 - effect: NoSchedule key: node-role.kubernetes.io.infra operator: Equal value: \"value1\" tolerationSeconds: 3600", "controllers.SelfNodeRemediationConfig ignoring selfnoderemediationconfig CRs that are not named 'self-node-remediation-config' or not in the namespace of the operator: 'openshift-workload-availability' {\"selfnoderemediationconfig\": \"openshift-workload-availability/selfnoderemediationconfig-copy\"}", "apiVersion: self-node-remediation.medik8s.io/v1alpha1 kind: SelfNodeRemediationTemplate metadata: creationTimestamp: \"2022-03-02T08:02:40Z\" name: self-node-remediation-<remediation_object>-deletion-template 1 namespace: openshift-workload-availability spec: template: spec: remediationStrategy: <remediation_strategy> 2", "oc get snr -A", "oc delete ds <self-node-remediation-ds> -n <namespace>", "oc delete snrc <self-node-remediation-config> -n <namespace>", "oc delete snrt <self-node-remediation-template> -n <namespace>" ]
https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.3/html/remediation_fencing_and_maintenance/self-node-remediation-operator-remediate-nodes
3.2. Type Conversions
3.2. Type Conversions Data types may be converted from one form to another either explicitly or implicitly. Implicit conversions automatically occur in criteria and expressions to ease development. Explicit data type conversions require the use of the CONVERT function or CAST keyword. Note Array conversions are only valid if you use them to convert or cast to and from compatible object arrays. You cannot, for example, cast from integer[] to long[] . Type Conversion Considerations Any type may be implicitly converted to the OBJECT type. The OBJECT type may be explicitly converted to any other type. The NULL value may be converted to any type. Any valid implicit conversion is also a valid explicit conversion. Situations involving literal values that would normally require explicit conversions may have the explicit conversion applied implicitly if no loss of information occurs. If widenComparisonToString is false (the default), Red Hat JBoss Data Virtualization throws an exception when it detects an explicit conversion in criteria that cannot be applied implicitly. If widenComparisonToString is true, then depending upon the comparison, a widening conversion is applied or the criteria are treated as false. For example: SELECT * FROM my.table WHERE created_by = 'not a date' Given that created_by is typed as date: with widenComparisonToString set to false, rather than converting 'not a date' to a date value, Red Hat JBoss Data Virtualization throws an exception; with widenComparisonToString set to true, the criteria remain a string comparison and are therefore false. Explicit conversions that are not allowed between two types will result in an exception before execution. Allowed explicit conversions may still fail during processing if the runtime values are not actually convertible. Warning The JBoss Data Virtualization conversions of float/double/bigdecimal/timestamp to string rely on the JDBC/Java defined output formats. Pushdown behavior attempts to mimic these results, but may vary depending upon the actual source type and conversion logic. Care must be taken to not assume the string form in criteria or other places where a variation may cause different results. Table 3.2. Type Conversions Source Type Valid Implicit Target Types Valid Explicit Target Types string clob char, boolean, byte, short, integer, long, biginteger, float, double, bigdecimal, xml [a] char string boolean string, byte, short, integer, long, biginteger, float, double, bigdecimal byte string, short, integer, long, biginteger, float, double, bigdecimal boolean short string, integer, long, biginteger, float, double, bigdecimal boolean, byte integer string, long, biginteger, double, bigdecimal boolean, byte, short, float long string, biginteger, bigdecimal boolean, byte, short, integer, float, double biginteger string, bigdecimal boolean, byte, short, integer, long, float, double bigdecimal string boolean, byte, short, integer, long, biginteger, float, double date string, timestamp time string, timestamp timestamp string date, time clob string xml string [b] [a] string to xml is equivalent to XMLPARSE(DOCUMENT exp). [b] xml to string is equivalent to XMLSERIALIZE(exp AS STRING).
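To keep such comparisons deterministic regardless of the widenComparisonToString setting, cast the literal explicitly to the column's type. A minimal sketch, with an illustrative literal value: SELECT * FROM my.table WHERE created_by = CAST('2011-01-01' AS date) The equivalent CONVERT form is CONVERT('2011-01-01', date) ; both are explicit conversions as described above.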
[ "SELECT * FROM my.table WHERE created_by = 'not a date'", "SELECT * FROM my.table WHERE created_by = 'not a date'" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/Type_Conversions
Chapter 2. Installation
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.7 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux subscriptions listed in the Knowledgebase article How to use Red Hat Software Collections (RHSCL) or Red Hat Developer Toolset (DTS)? . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using and Configuring Red Hat Subscription Manager . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is also available in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note Packages that require the Optional repository, which are listed in Section 2.1.2, "Packages from the Optional Repository" , cannot be installed from the ISO image. Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the previous step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" .
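For example, on a Red Hat Enterprise Linux 7 server the command might be: subscription-manager repos --enable rhel-server-rhscl-7-rpms The exact repository name depends on your system variant and version, as shown in the format above.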
For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using and Configuring Red Hat Subscription Manager . Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Repository Some of the Red Hat Software Collections packages require the Optional repository to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this repository, see the relevant Knowledgebase article How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Management (RHSM)? . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional repository to be enabled are listed in the tables below. Note that packages from the Optional repository are unsupported. For details, see the Knowledgebase article Support policy of the optional and supplementary channels in Red Hat Enterprise Linux . Table 2.1. Packages That Require Enabling of the Optional Repository in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Repository devtoolset-10-build scl-utils-build devtoolset-10-dyninst-testsuite glibc-static devtoolset-10-elfutils-debuginfod bsdtar devtoolset-10-gcc-plugin-devel libmpc-devel devtoolset-10-gdb source-highlight devtoolset-9-build scl-utils-build devtoolset-9-dyninst-testsuite glibc-static devtoolset-9-gcc-plugin-devel libmpc-devel devtoolset-9-gdb source-highlight httpd24-mod_ldap apr-util-ldap httpd24-mod_session apr-util-openssl python27-python-debug tix python27-python-devel scl-utils-build python27-tkinter tix rh-git227-git-cvs cvsps rh-git227-git-svn perl-Git-SVN, subversion rh-git227-perl-Git-SVN subversion-perl rh-java-common-ant-apache-bsf rhino rh-java-common-batik rhino rh-maven35-build scl-utils-build rh-maven35-xpp3-javadoc java-1.8.0-openjdk-javadoc-zip, java-11-openjdk-javadoc, java-1.7.0-openjdk-javadoc, java-11-openjdk-javadoc-zip, java-1.8.0-openjdk-javadoc rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python38-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.7 requires the removal of any earlier pre-release versions. If you have installed any version of Red Hat Software Collections 2.1 component, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections. The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. 
If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php73 and rh-mariadb105 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl530-perl-CPAN and rh-perl530-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby27-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see How can I download or install debuginfo packages for RHEL systems? . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. 
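As a sketch of the rpmbuild approach mentioned above, where the collection name rh-php73 and the source RPM name mypackage.src.rpm are purely illustrative: rpmbuild --define 'scl rh-php73' --rebuild mypackage.src.rpm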
For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
[ "rhel- variant -rhscl-6-rpms rhel- variant -rhscl-6-debug-rpms rhel- variant -rhscl-6-source-rpms rhel-server-rhscl-6-eus-rpms rhel-server-rhscl-6-eus-source-rpms rhel-server-rhscl-6-eus-debug-rpms rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms rhel-server-rhscl-7-eus-rpms rhel-server-rhscl-7-eus-source-rpms rhel-server-rhscl-7-eus-debug-rpms", "~]# yum install rh-php73 rh-mariadb105", "~]# yum install rh-perl530-perl-CPAN rh-perl530-perl-Archive-Tar", "~]# debuginfo-install rh-ruby27-ruby" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.7_release_notes/chap-Installation
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 3.3-5 Wed Dec 20 2023 Lenka Spackova Fixed broken links and anchors. Revision 3.3-4 Fri Nov 12 2021 Lenka Spackova Updated Section 4.7, "Database Connectors" . Revision 3.3-3 Tue Mar 17 2020 Lenka Spackova Added a reference to container-specific upgrading instructions for PostgreSQL . Revision 3.3-2 Fri Nov 15 2019 Lenka Spackova Updated Migrating to MariaDB 10.3. Revision 3.3-1 Tue Jun 11 2019 Lenka Spackova Release of Red Hat Software Collections 3.3 Release Notes. Revision 3.3-0 Tue Apr 16 2019 Lenka Spackova Release of Red Hat Software Collections 3.3 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/appe-documentation-3.3_release_notes-revision_history
4.366. yum-utils
4.366. yum-utils 4.366.1. RHBA-2011:1703 - yum-utils bug fix and enhancement update Updated yum-utils packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The yum-utils packages provide a collection of utilities and examples for the Yum package manager. Bug Fixes BZ# 694188 When using the previous version of the yum-groups-manager utility, an attempt to use the "-c" (or "--config") command line option to specify an alternative configuration file failed, and the utility incorrectly used the default /etc/yum.conf file. This update adapts the underlying source code to correct this error, and yum-groups-manager now accepts alternative configuration files as expected. BZ# 699470 Prior to this update, when the yumdownloader utility failed to download a requested package, it incorrectly exited with a status of 0. With this update, yumdownloader exits with a non-zero status in these situations. BZ# 709043 Due to an error in the detection of the return value of an internal method, the previous version of the yum-builddep utility failed to exit with a non-zero exit status when it encountered an error. This update applies a patch that ensures the return value of the aforementioned method is correctly evaluated, and when an error is encountered, yum-builddep now exits with a non-zero status as expected. BZ# 713108 Previously, when a user executed the reposync utility with the "-r" (or "--repoid") command line option, the utility incorrectly used repositories that were enabled in the configuration instead of only the repositories specified by the option. This update applies a patch to make sure these command line options work correctly. BZ# 734428 When using the priorities plug-in, running the "yum update" command may have incorrectly failed to offer some packages for update. This update corrects this error, and the priorities plug-in no longer prevents "yum update" from fully updating the system. BZ# 659740 Prior to this update, certain commands in the EXAMPLES sections of the repoquery(1), show-installed(1), yum-filter-data(1), yum-groups-manager(1), yum-list-data(1), and yum-verify(1) manual pages used incorrect glyphs for single quotes. Consequent to this, an attempt to copy such a command and run it on the command line failed with an error. This update ensures that all command examples now use typewriter straight single quotes as expected. BZ# 720967 Various typos in the yum-security(8) manual page have been corrected. Enhancement BZ# 710469 Source repository patterns for Red Hat Network (RHN) have been added to the yumdownloader and yum-builddep utilities. All users of yum-utils are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
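As an illustration of the corrected exit-status behavior, where the httpd package name is an arbitrary example: yumdownloader httpd || echo "download failed" yum-builddep httpd || echo "builddep failed" A script can now rely on the non-zero exit status of either utility to detect failures.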
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/yum-utils
Chapter 2. Power monitoring overview
Chapter 2. Power monitoring overview Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. About power monitoring You can use power monitoring for Red Hat OpenShift to monitor the power usage and identify power-consuming containers running in an OpenShift Container Platform cluster. Power monitoring collects and exports energy-related system statistics from various components, such as CPU and DRAM. It provides granular power consumption data for Kubernetes pods, namespaces, and nodes. Warning Power monitoring Technology Preview works only in bare-metal deployments. Most public cloud vendors do not expose Kernel Power Management Subsystems to virtual machines. 2.2. Power monitoring architecture Power monitoring is made up of the following major components: The Power monitoring Operator For administrators, the Power monitoring Operator streamlines the monitoring of power usage for workloads by simplifying the deployment and management of Kepler in an OpenShift Container Platform cluster. The setup and configuration for the Power monitoring Operator are simplified by adding a Kepler custom resource definition (CRD). The Operator also manages operations, such as upgrading, removing, configuring, and redeploying Kepler. Kepler Kepler is a key component of power monitoring. It is responsible for monitoring the power usage of containers running in OpenShift Container Platform. It generates metrics related to the power usage of both nodes and containers. 2.3. Kepler hardware and virtualization support Kepler is the key component of power monitoring that collects real-time power consumption data from a node through one of the following methods: Kernel Power Management Subsystem (preferred) rapl-sysfs : This requires access to the /sys/class/powercap/intel-rapl host file. rapl-msr : This requires access to the /dev/cpu/*/msr host file. The estimator power source Without access to the kernel's power cap subsystem, Kepler uses a machine learning model to estimate the power usage of the CPU on the node. Warning The estimator feature is experimental, not supported, and should not be relied upon. You can identify the power estimation method for a node by using the Power Monitoring / Overview dashboard. 2.4. Additional resources Power monitoring dashboards overview
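To check whether a bare-metal node exposes the preferred RAPL interface, you can look for the path listed above directly on the node; a minimal sketch: ls /sys/class/powercap/intel-rapl* If the path is absent, Kepler falls back to the estimator power source described above.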
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/power_monitoring/power-monitoring-overview
10.2.2. Setting up Communication between Guest Agent and Host
10.2.2. Setting up Communication between Guest Agent and Host The host machine communicates with the guest agent through a VirtIO serial connection between the host and guest machines. A VirtIO serial channel is connected to the host via a character device driver (typically a Unix socket), and the guest listens on this serial channel. The following procedure shows how to set up the host and guest machines for guest agent use. Note For instructions on how to set up the QEMU guest agent on Windows guests, refer to the instructions found here . Procedure 10.1. Setting up communication between guest agent and host Open the guest XML Open the guest XML with the QEMU guest agent configuration. You will need the guest name to open the file. Use the command # virsh list on the host machine to list the guests that it can recognize. In this example, the guest's name is rhel6 : Edit the guest XML file Add the following elements to the XML file and save the changes. <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/rhel6.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Figure 10.1. Editing the guest XML to configure the QEMU guest agent Start the QEMU guest agent in the guest Download and install the guest agent in the guest virtual machine using yum install qemu-guest-agent if you have not done so already. Once installed, start the service as follows: You can now communicate with the guest by sending valid libvirt commands over the established character device driver.
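For example, a quick way to verify the channel end to end is to send a guest-ping from the host through the agent; the guest name matches the example above, and support for the qemu-agent-command option depends on the libvirt version in use:

# virsh qemu-agent-command rhel6 '{"execute":"guest-ping"}'
{"return":{}}

An empty return object indicates that the guest agent answered over the VirtIO serial channel.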
[ "virsh edit rhel6", "<channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/rhel6.agent'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>", "service start qemu-guest-agent" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-qemu_guest_agent-set_up_communication_between_guest_agent_and_host
Chapter 6. Configuring the system and running tests by using Cockpit
Chapter 6. Configuring the system and running tests by using Cockpit To run the certification tests by using Cockpit you need to upload the test plan to the HUT first. After running the tests, download the results and review them. This chapter contains the following topics: Section 6.1, "Setting up the Cockpit server" Section 6.2, "Adding the host under test to Cockpit" Section 6.3, "Getting authorization on the Red Hat SSO network" Section 6.4, "Downloading test plans in Cockpit from Red Hat certification portal" Section 6.5, "Using the test plan to prepare the host under test for testing" Section 6.6, "Running the certification tests using Cockpit" Section 6.7, "Reviewing and downloading the test results file" Section 6.8, "Submitting the test results from Cockpit to the Red Hat Certification Portal" Section 6.9, "Uploading the results file of the executed test plan to Red Hat Certification portal" 6.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. Note You must set up Cockpit on a new system, which is separate from the host under test. Ensure that Cockpit has access to the host under test. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites The Cockpit server has RHEL version 8 or 9 installed. You have installed the Cockpit plugin on your system. You have enabled the Cockpit service. Procedure Log in to the system where you installed Cockpit. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. 6.2. Adding the host under test to Cockpit Adding the host under test (HUT) to Cockpit lets the two systems communicate by using passwordless SSH. Prerequisites You have the IP address or hostname of the HUT. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow next to the logged-in Cockpit user name -> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter the name you want to assign to this system. Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Click Accept key and connect to let Cockpit communicate with the system through passwordless SSH. Enter the Password . Select the Authorize SSH Key checkbox. Click Log in . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the system you just added displays under the Hosts section on the right. 6.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 6.4. 
Downloading test plans in Cockpit from Red Hat certification portal For non-authorized or limited-access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 6.5. Using the test plan to prepare the host under test for testing Provisioning the host under test performs a number of operations, such as setting up passwordless SSH communication with Cockpit, installing the required packages on your system based on the certification type, and creating a final test plan to run, which is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required hardware packages will be installed if the test plan is designed for certifying a hardware product. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Hosts tab, and then click the host under test on which you want to run the tests. Click Provision . A dialog box appears. Click Upload, and then select the new test plan .xml file. Then, click . A successful upload message is displayed. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it by following the same step. However, you must run rhcert-clean all in the Terminal tab before proceeding. In the Role field, select Host under test and click Submit . By default, the file is uploaded to /var/rhcert/plans/<testplanfile.xml> . 6.6. Running the certification tests using Cockpit Prerequisites You have prepared the host under test . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools -> Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests. Click the Terminal tab and select Run. A list of recommended tests based on the test plan uploaded displays. The final test plan to run is a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 6.7. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab to view the test results generated. 
Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/hostname-date-time.xml . 6.8. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 6.9. Uploading the results file of the executed test plan to Red Hat Certification portal Prerequisites You have downloaded the test results file from either Cockpit or the HUT directly. Procedure Log in to Red Hat Certification portal . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat will review the results file you submitted and suggest the next steps. For more information, visit Red Hat Certification portal .
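As a hedged supplement to the prerequisites in Section 6.1, the following commands are a common way to confirm that the Cockpit server is enabled and serving on port 9090 before hosts are added; the firewall step may already be satisfied in your environment:

systemctl enable --now cockpit.socket       # ensure the Cockpit service is enabled and running
firewall-cmd --add-service=cockpit --permanent && firewall-cmd --reload
ss -tlnp | grep 9090                        # confirm Cockpit is listening on the required port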
[ "yum install redhat-certification-cockpit" ]
https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_certified_cloud_and_service_provider_certification_workflow_guide/assembly_cloud-wf-configuring-system-and-running-tests-by-using-cockpit_cloud-instance-wf-setting-test-environment
Installing on GCP
Installing on GCP OpenShift Container Platform 4.13 Installing OpenShift Container Platform on Google Cloud Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_gcp/index
Chapter 23. General Updates
Chapter 23. General Updates runc notifies systemd about user-specified CPU quota limits Previously, the runc program did not notify systemd about user-specified CPU quota limits when a container was started. Consequently, systemd was unaware of the user-specified limits, and therefore the CPU quota was reset to the default value (unlimited) during the systemctl daemon-reload operation. With this update, runc now notifies systemd about user-specified CPU quota limits when a container is started, and the described problem no longer occurs. (BZ#1455071) Applications no longer suffer segmentation faults when LD_LIBRARY_PATH contains only non-existent paths Previously, when the LD_LIBRARY_PATH environment variable contained only non-existent paths, the dynamic loader produced a segmentation fault. Consequently, applications terminated unexpectedly with a segmentation fault at startup in the described situation. The dynamic loader has been fixed. As a result, applications no longer terminate unexpectedly in the described situation. Note that updating the glibc package is enough to fix this bug for any affected applications. (BZ# 1443236 ) The setup package now creates the tape group with the correct group number Previously, when installing the setup package, the tape group was created with an ID that was inconsistent with all other versions of Red Hat Enterprise Linux. With this update, the group ID has been changed from 30 to the standard 33 . As a result, fresh installations of the operating system now have the correct group number for the tape group. On previously installed systems affected by this problem: 1. Edit the group ID in the /etc/group and /etc/gshadow files. 2. Change the group ownership for all files owned by the former tape group. (BZ#1433020)
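For the two-step manual remediation described above, the following is a hedged sketch of commands an administrator might use; verify the old and new group IDs on your own system before running anything:

groupmod -g 33 tape                           # rewrites the group ID in /etc/group and /etc/gshadow
find / -xdev -gid 30 -exec chgrp tape {} +    # re-owns files still carrying the former group ID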
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_general_updates
Chapter 5. Debugging issues
Chapter 5. Debugging issues Central saves information to its container logs. 5.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: $ export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 5.2. Viewing the logs You can use either the oc or kubectl command to view the logs for the Central pod. Procedure To view the logs for the Central pod by using kubectl , run the following command: $ kubectl logs -n stackrox <central_pod> To view the logs for the Central pod by using oc , run the following command: $ oc logs -n stackrox <central_pod> 5.3. Viewing the current log level You can change the log level to see more or less information in Central logs. Procedure Run the following command to view the current log level: $ roxctl central debug log Additional resources roxctl central debug 5.4. Changing the log level Procedure Run the following command to change the log level: $ roxctl central debug log --level= <log_level> 1 1 The acceptable values for <log_level> are Panic , Fatal , Error , Warn , Info , and Debug . Additional resources roxctl central debug 5.5. Retrieving debugging information Procedure Run the following command to gather the debugging information for investigating issues: $ roxctl central debug dump To generate a diagnostic bundle with the RHACS administrator password or API token and central address, follow the procedure in Generating a diagnostic bundle by using the roxctl CLI . Additional resources roxctl central debug
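A short, hedged session tying these commands together, raising the log level during an investigation and then restoring it; the endpoint is illustrative and Info is assumed to be the previous level:

$ export ROX_ENDPOINT=central.example.com:443
$ roxctl central debug log --level=Debug
$ roxctl central debug log                    # confirm the new level
$ roxctl central debug log --level=Info       # restore the earlier level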
[ "export ROX_ENDPOINT= <host:port> 1", "kubectl logs -n stackrox <central_pod>", "oc logs -n stackrox <central_pod>", "roxctl central debug log", "roxctl central debug log --level= <log_level> 1", "roxctl central debug dump" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/roxctl_cli/debugging-issues-1
Chapter 2. Architectures
Chapter 2. Architectures Red Hat Enterprise Linux 7.4 is distributed with the kernel version 3.10.0-693, which provides support for the following architectures: [1] 64-bit AMD 64-bit Intel IBM POWER7+ and POWER8 (big endian) [2] IBM POWER8 (little endian) [3] IBM z Systems [4] [1] Note that the Red Hat Enterprise Linux 7.4 installation is supported only on 64-bit hardware. Red Hat Enterprise Linux 7.4 is able to run 32-bit operating systems, including versions of Red Hat Enterprise Linux, as virtual machines. [2] Red Hat Enterprise Linux 7.4 (big endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power, and on PowerVM. [3] Red Hat Enterprise Linux 7.4 (little endian) is currently supported as a KVM guest on Red Hat Enterprise Virtualization for Power, on PowerVM, and PowerNV (bare metal). [4] Note that Red Hat Enterprise Linux 7.4 supports IBM zEnterprise 196 hardware or later; IBM z10 Systems mainframe systems are no longer supported and will not boot Red Hat Enterprise Linux 7.4.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/chap-red_hat_enterprise_linux-7.4_release_notes-architectures
Configuring your Red Hat build of Quarkus applications by using a YAML file
Configuring your Red Hat build of Quarkus applications by using a YAML file Red Hat build of Quarkus 3.2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/index
12.3.2. Zone File Resource Records
12.3.2. Zone File Resource Records The primary component of a zone file is its resource records. There are many types of zone file resource records. The following are used most frequently: A - Address record, which specifies an IP address to assign to a name, as in this example: If the <host> value is omitted, then an A record points to a default IP address for the top of the namespace. This system is the target for all non-FQDN requests. Consider the following A record examples for the example.com zone file: Requests for example.com are pointed to 10.0.1.3, while requests for server1.example.com are pointed to 10.0.1.5. CNAME - Canonical name record, which maps one name to another. This type of record is also known as an alias record. The example tells named that any requests sent to the <alias-name> should point to the host, <real-name> . CNAME records are most commonly used to point to services that use a common naming scheme, such as www for Web servers. In the following example, an A record binds a hostname to an IP address, while a CNAME record points the commonly used www hostname to it. MX - Mail eXchange record, which tells where mail sent to a particular namespace controlled by this zone should go. In this example, the <preference-value> allows numerical ranking of the email servers for a namespace, giving preference to some email systems over others. The MX resource record with the lowest <preference-value> is preferred over the others. However, multiple email servers can possess the same value to distribute email traffic evenly among them. The <email-server-name> may be a hostname or FQDN. In this example, the first mail.example.com email server is preferred to the mail2.example.com email server when receiving email destined for the example.com domain. NS - NameServer record, which announces the authoritative nameservers for a particular zone. This is an example of an NS record: The <nameserver-name> should be an FQDN. In the following example, two nameservers are listed as authoritative for the domain. It is not important whether these nameservers are slaves or if one is a master; they are both still considered authoritative. PTR - PoinTeR record, designed to point to another part of the namespace. PTR records are primarily used for reverse name resolution, as they point IP addresses back to a particular name. Refer to Section 12.3.4, "Reverse Name Resolution Zone Files" for more examples of PTR records in use. SOA - Start Of Authority resource record, proclaims important authoritative information about a namespace to the nameserver. Located after the directives, an SOA resource record is the first resource record in a zone file. The following example shows the basic structure of an SOA resource record: The @ symbol places the $ORIGIN directive (or the zone's name, if the $ORIGIN directive is not set) as the namespace being defined by this SOA resource record. The hostname of the primary nameserver that is authoritative for this domain is the <primary-name-server> directive, and the email of the person to contact about this namespace is the <hostmaster-email> directive. The <serial-number> directive is a numerical value incremented every time the zone file is altered to indicate it is time for named to reload the zone. The <time-to-refresh> directive is the numerical value slave servers use to determine how long to wait before asking the master nameserver if any changes have been made to the zone. 
Slave servers compare the <serial-number> value against their copy of the zone to determine if they are using outdated zone data and should therefore refresh it. The <time-to-retry> directive is a numerical value used by slave servers to determine the length of time to wait before issuing a refresh request in the event the master nameserver is not answering. If the master has not replied to a refresh request before the amount of time specified in the <time-to-expire> directive elapses, the slave servers stop responding as an authority for requests concerning that namespace. The <minimum-TTL> directive is the quantity of time other nameservers cache the zone's information. When configuring BIND, all times are specified in seconds. However, it is possible to use abbreviations when specifying units of time other than seconds, such as minutes ( M ), hours ( H ), days ( D ), and weeks ( W ). Table 12.1, "Seconds compared to other time units" shows amounts of time in seconds and their equivalents in other formats. Table 12.1. Seconds compared to other time units Seconds Other Time Units 60 1M 1800 30M 3600 1H 10800 3H 21600 6H 43200 12H 86400 1D 259200 3D 604800 1W 31536000 365D The following example illustrates the form an SOA resource record might take when it is populated with real values.
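Drawing the record types together, a hedged sketch of how a small example.com zone file might combine them; the serial number and timer values are illustrative only:

$ORIGIN example.com.
$TTL 86400
@        IN  SOA    dns1.example.com. hostmaster.example.com. (
             2001062501 ; serial
             21600      ; refresh after 6 hours
             3600       ; retry after 1 hour
             604800     ; expire after 1 week
             86400 )    ; minimum TTL of 1 day
         IN  NS     dns1.example.com.
         IN  NS     dns2.example.com.
         IN  MX     10 mail.example.com.
         IN  A      10.0.1.3
server1  IN  A      10.0.1.5
www      IN  CNAME  server1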
[ "<host> IN A <IP-address>", "IN A 10.0.1.3 server1 IN A 10.0.1.5", "<alias-name> IN CNAME <real-name>", "server1 IN A 10.0.1.5 www IN CNAME server1", "IN MX <preference-value> <email-server-name>", "IN MX 10 mail.example.com. IN MX 20 mail2.example.com.", "IN NS <nameserver-name>", "IN NS dns1.example.com. IN NS dns2.example.com.", "@ IN SOA <primary-name-server> <hostmaster-email> ( <serial-number> <time-to-refresh> <time-to-retry> <time-to-expire> <minimum-TTL> )", "@ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-bind-zone-rr
Chapter 13. Compiler and Tools
Chapter 13. Compiler and Tools System Information Gatherer and Reporter (SIGAR) The System Information Gatherer and Reporter (SIGAR) is a library and command-line tool for accessing operating system and hardware level information across multiple platforms and programming languages. In Red Hat Enterprise Linux 6.4 and later, SIGAR is considered a Technology Preview package. Package: sigar
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/chap-red_hat_enterprise_linux-6.10_technical_notes-technology_previews-compiler_and_tools
Chapter 16. Certificate Profiles Configuration
Chapter 16. Certificate Profiles Configuration 16.1. Creating and Editing Certificate Profiles Directly on the File System As part of the installation process of a CA, the certificate enrollment profiles can be modified directly on the file system by modifying the profiles' configuration files. Default files exist for the default profiles at installation; when new profiles are needed, new profile configuration files are to be created. The configuration files are stored in the CA profile directory, instance_directory /ca/profiles/ca/ , such as /var/lib/pki/pki-ca/ca/profiles/ca/ . The file is named profile_name .cfg . All of the parameters for profile rules can be set or modified in those profile configuration files. Profile rules can be inputs, outputs, authentication, authorization, defaults, and constraints. The enrollment profiles for the CA certificates are located in the /var/lib/pki/instance_name/ca/conf directory with the name *.profile . Note For audit reasons, use this method only during the CA installation prior to deployment. Restart the server after editing the profile configuration file for the changes to take effect. Section 16.1.1.1, "Profile Configuration Parameters" Section 16.1.1.2, "Modifying Certificate Extensions Directly on the File System" Section 16.1.1.3, "Adding Profile Inputs Directly on the File System" 16.1.1. Configuring non-CA System Certificate Profiles 16.1.1.1. Profile Configuration Parameters All of the parameters for a profile rule - defaults, inputs, outputs, and constraints - are configured within a single policy set. A policy set for a profile has the name policyset. policyName.policyNumber . For example: The common profile configuration parameters are described in Table 16.1, "Profile Configuration File Parameters" . Table 16.1. Profile Configuration File Parameters Parameter Description desc Gives a free text description of the certificate profile, which is shown on the end-entities page. For example, desc=This certificate profile is for enrolling server certificates with agent authentication. enable Sets whether the profile is enabled, and therefore accessible through the end-entities page. For example, enable=true . auth.instance_id Sets which authentication manager plug-in to use to authenticate the certificate request submitted through the profile. For automatic enrollment, the CA issues a certificate immediately if the authentication is successful. If authentication fails or there is no authentication plug-in specified, the request is queued to be manually approved by an agent. For example, auth.instance_id=CMCAuth . The authentication method must be one of the registered authentication instances from CS.cfg . authz.acl Specifies the authorization constraint. Most commonly, this is used to set the group evaluation ACL. For example, this caCMCUserCert parameter requires that the signer of the CMC request belong to the Certificate Manager Agents group: authz.acl=group="Certificate Manager Agents" In directory-based user certificate renewal, this option is used to ensure that the original requester and the currently-authenticated user are the same. An entity must authenticate (bind or, essentially, log into the system) before authorization can be evaluated. The authorization method specified must be one of the registered authorization instances from CS.cfg . name Gives the name of the profile. For example, name=Agent-Authenticated Server Certificate Enrollment . This name is displayed in the end user's enrollment or renewal page. 
input.list Lists the allowed inputs for the profile by name. For example, input.list=i1,i2 . input. input_id .class_id Gives the Java class name for the input by input ID (the name of the input listed in input.list ). For example, input.i1.class_id=cmcCertReqInputImpl . output.list Lists the possible output formats for the profile by name. For example, output.list=o1 . output. output_id .class_id Gives the Java class name for the output format named in output.list . For example, output.o1.class_id=certOutputImpl . policyset.list Lists the configured profile rules. For dual certificates, one set of rules applies to the signing key and the other to the encryption key. Single certificates use only one set of profile rules. For example, policyset.list=serverCertSet . policyset. policyset_id .list Lists the policies within the policy set configured for the profile by policy ID number in the order in which they should be evaluated. For example, policyset.serverCertSet.list=1,2,3,4,5,6,7,8 . policyset. policyset_id.policy_number. constraint.class_id Gives the Java class name of the constraint plug-in set for the default configured in the profile rule. For example, policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl . policyset. policyset_id.policy_number. constraint.name Gives the user-defined name of the constraint. For example, policyset.serverCertSet.1.constraint.name=Subject Name Constraint . policyset. policyset_id.policy_number. constraint.params. attribute Specifies a value for an allowed attribute for the constraint. The possible attributes vary depending on the type of constraint. For example, policyset.serverCertSet.1.constraint.params.pattern=CN=.* . policyset. policyset_id.policy_number. default.class_id Gives the Java class name for the default set in the profile rule. For example, policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl policyset. policyset_id.policy_number. default.name Gives the user-defined name of the default. For example, policyset.serverCertSet.1.default.name=Subject Name Default policyset. policyset_id.policy_number. default.params. attribute Specifies a value for an allowed attribute for the default. The possible attributes vary depending on the type of default. For example, policyset.serverCertSet.1.default.params.name=CN=(Name)$request.requestor_name$ . 16.1.1.2. Modifying Certificate Extensions Directly on the File System Changing constraints changes the restrictions on the type of information which can be supplied. Changing the defaults and constraints can also add, delete, or modify the extensions which are accepted or required from a certificate request. For example, the default caFullCMCUserCert profile is set to create a Key Usage extension from information in the request. The default is updated to allow user-supplied key extensions: This sets the server to accept the extension OID 2.5.29.15 in the certificate request. Other constraints and defaults can be changed similarly. Make sure that any required constraints are included with appropriate defaults, that defaults are changed when a different constraint is required, and that only allowed constraints are used with the default. For more information, see the Defaults Reference section and the Constraints Reference section in the Red Hat Certificate System Administration Guide . 
16.1.1.2.1. Key Usage and Extended Key Usage Consistency Red Hat Certificate System provides a flexible infrastructure for administrators to create customized enrollment profiles to meet the requirements of their environment. However, it is important that profiles do not allow issuing certificates that violate the requirements defined in RFC 5280. When creating an enrollment profile where both Key Usage (KU) and Extended Key Usage (EKU) extensions are present, it is important to make sure that the consistency between the two extensions is maintained as per section 4.2.1.12, Extended Key Usage, of RFC 5280. The following table provides the guidelines that map consistent Key Usage bits to the Extended Key Usage extension for each purpose: Purpose / Extended Key Usages Key Usages TLS Server Authentication id-kp-serverAuth digitalSignature , keyEncipherment , or keyAgreement TLS Client (Mutual) Authentication id-kp-clientAuth digitalSignature and/or keyAgreement Code Signing id-kp-codeSigning digitalSignature Email Protection id-kp-emailProtection digitalSignature , nonRepudiation , and/or ( keyEncipherment or keyAgreement ) OCSP Response Signing id-kp-OCSPSigning digitalSignature and/or nonRepudiation The following shows two examples of inconsistent EKU/KU: An enrollment profile that is intended for the purpose of OCSP response signing contains Extended key usage id-kp-OCSPSigning but with the keyEncipherment key usage bit: An enrollment profile that is intended for the purpose of TLS server authentication contains Extended key usage id-kp-serverAuth but with the CRL signing key usage bit: For details about the KU extension, see: The Key Usage Extension Constraint section in the Red Hat Certificate System Administration Guide . The keyUsage section in the Red Hat Certificate System Administration Guide . For details about the EKU extension, see: The Extended Key Usage Extension Constraint section in the Red Hat Certificate System Administration Guide . The extKeyUsage section in the Red Hat Certificate System Administration Guide . 16.1.1.2.2. Configuring Cross-Pair Profiles Cross-pair certificates are distinct CA signing certificates that establish a trust partner relationship whereby entities from these two distinct PKIs will trust each other. Both partner CAs store the other CA's signing certificate in its database, so all of the certificates issued within the other PKI are trusted and recognized. Two extensions supported by the Certificate System can be used to establish such a trust partner relationship (cross-certification): The Certificate Policies Extension ( CertificatePoliciesExtension ) specifies the terms that the certificate falls under, which is often unique for each PKI. The Policy Mapping Extension ( PolicyMappingExtension ) seals the trust between two PKIs by mapping the certificate profiles of the two environments. Issuing cross-pair certificates requires the Certificate Policies Extension, explained in the certificatePoliciesExt annex in the Red Hat Certificate System Administration Guide . To ensure that the issued certificate contains the CertificatePoliciesExtension, the enrollment profile needs to include an appropriate policy rule, for example: Certificates issued with the enrollment profile in this example would contain the following information: For more information on using cross-pair certificates, see the Using Cross-Pair Certificates section in the Red Hat Certificate System Administration Guide . 
For more information on publishing cross-pair certificates, see the Publishing Cross-Pair Certificates section in the Red Hat Certificate System Administration Guide . 16.1.1.3. Adding Profile Inputs Directly on the File System The certificate profile configuration file in the CA's profiles/ca directory contains the input information for that particular certificate profile form. Inputs are the fields in the end-entities page enrollment forms. There is a parameter, input.list , which lists the inputs included in that profile. Other parameters define the inputs; these are identified by the format input. ID . For example, this adds a generic input to a profile: For more information on what inputs, or form fields, are available, see the Input Reference section in the Red Hat Certificate System Administration Guide . 16.1.2. Changing the Default Validity Time of Certificates In each profile on a Certificate Authority (CA), you can set how long certificates issued using a profile are valid. You can change this value for security reasons. For example, to set the validity of the generated Certificate Authority (CA) signing certificate to 825 days (approximately 27 months), open the /var/lib/pki/ instance_name /ca/profiles/ca/caCACert.cfg file in an editor and set: 16.1.3. Configuring CA System Certificate Profiles Unlike the non-CA subsystems, the enrollment profiles for the CA's own system certificates are kept in the /var/lib/pki/[instance name]/ca/conf file. Those profiles are: caAuditSigningCert.profile eccAdminCert.profile rsaAdminCert.profile caCert.profile eccServerCert.profile rsaServerCert.profile caOCSPCert.profile eccSubsystemCert.profile rsaSubsystemCert.profile If you wish to change the default values in the profiles above, make changes to the profiles before the procedure in Section 7.7.6, "Starting the Configuration Step" is performed. The following is an example that demonstrates: How to change the validity of the CA signing certificate. How to add extensions (for example, the Certificate Policies Extension). Back up the original CA certificate profile used by pkispawn . Open the CA certificate profile used by the configuration wizard. Reset the validity period in the Validity Default to whatever you want. For example, to change the period to two years: Add any extensions by creating a new default entry in the profile and adding it to the list. For example, to add the Certificate Policies Extension, add the default (which, in this example, is default #9): Then, add the default number to the list of defaults to use the new default: 16.1.4. Managing Smart Card CA Profiles Note Features in this section on TMS are not tested in the evaluation. This section is for reference only. The TPS does not generate or approve certificate requests; it sends any requests approved through the Enterprise Security Client to the configured CA to issue the certificate. This means that the CA actually contains the profiles to use for tokens and smart cards. The profiles to use can be automatically assigned, based on the card type. The profile configuration files are in the /var/lib/pki/ instance_name /profiles/ca/ directory with the other CA profiles. The default profiles are listed in Table 16.2, "Default Token Certificate Profiles" . Table 16.2. Default Token Certificate Profiles Profile Name Configuration File Description Regular Enrollment Profiles Token Device Key Enrollment caTokenDeviceKeyEnrollment.cfg For enrolling tokens used for devices or servers. 
Token User Encryption Certificate Enrollment caTokenUserEncryptionKeyEnrollment.cfg For enrolling encryption certificates on the token for a user. Token User Signing Certificate Enrollment caTokenUserSigningKeyEnrollment.cfg For enrolling signing certificates on the token for a user. Token User MS Login Certificate Enrollment caTokenMSLoginEnrollment.cfg For enrolling user certificates to use for single sign-on to a Windows domain or PC. Temporary Token Profiles Temporary Device Certificate Enrollment caTempTokenDeviceKeyEnrollment.cfg For enrolling certificates for a device on a temporary token. Temporary Token User Encryption Certificate Enrollment caTempTokenUserEncryptionKeyEnrollment.cfg For enrolling an encryption certificate on a temporary token for a user. Temporary Token User Signing Certificate Enrollment caTempTokenUserSigningKeyEnrollment.cfg For enrolling a signing certificate on a temporary token for a user. Renewal Profiles [a] Token User Encryption Certificate Enrollment (Renewal) caTokenUserEncryptionKeyRenewal.cfg For renewing encryption certificates on the token for a user, if renewal is allowed. Token User Signing Certificate Enrollment (Renewal) caTokenUserSigningKeyRenewal.cfg For renewing signing certificates on the token for a user, if renewal is allowed. [a] Renewal profiles can only be used in conjunction with the profile that issued the original certificate. There are two settings that are beneficial: It is important that the original enrollment profile name does not change. The Renew Grace Period Constraint should be set in the original enrollment profile. This defines the amount of time before and after the certificate's expiration date when the user is allowed to renew the certificate. There are only a few examples of these in the default profiles, and they are mostly not enabled by default. 16.1.4.1. Editing Enrollment Profiles for the TPS Administrators have the ability to customize the default smart card enrollment profiles, used with the TPS. For instance, a profile could be edited to include the user's email address in the Subject Alternative Name extension. The email address for the user is retrieved from the authentication directory. To configure the CA for LDAP access, change the following parameters in the profile files, with the appropriate directory information: These CA profiles come with LDAP lookup disabled by default. The ldapStringAttributes parameter tells the CA which LDAP attributes to retrieve from the company directory. For example, if the directory contains uid as an LDAP attribute name, and this will be used in the subject name of the certificate, then uid must be listed in the ldapStringAttributes parameter, and request.uid listed as one of the components in the dnpattern . Editing certificate profiles is covered in the Setting up Certificate Profiles section in the Red Hat Certificate System Administration Guide . The format for the dnpattern parameter is covered in the Subject Name Constraint section and the Subject Name Default section in the Red Hat Certificate System Administration Guide . 16.1.4.2. Creating Custom TPS Profiles Certificate profiles are created as normal in the CA, but they also have to be configured in the TPS for them to be available for token enrollments. Note New profiles are added with new releases of Red Hat Certificate System. If an instance is migrated to Certificate System 10.0, then the new profiles need to be added to the migrated instance as if they are custom profiles. 
Create a new token profile for the issuing CA. Setting up profiles is covered in the Setting up Certificate Profiles section in the Red Hat Certificate System Administration Guide . Copy the profile into the CA's profiles directory, /var/lib/ instance_name /ca/profiles/ca/ . Edit the CA's CS.cfg file, and add the new profile references and the profile name to the CA's list of profiles. For example: Edit the TPS CS.cfg file, and add a line to point to the new CA enrollment profile. For example: Restart the instance after editing the smart card profiles: If the CA and TPS are in separate instances, restart both instances. Note Enrollment profiles for the External Registration ( externalReg ) setting are configured in the user LDAP entry. 16.1.4.3. Using the Windows Smart Card Logon Profile The TPS uses a profile to generate certificates to use for single sign-on to a Windows domain or PC; this is the Token User MS Login Certificate Enrollment profile ( caTokenMSLoginEnrollment.cfg ). However, there are some special considerations that administrators must account for when configuring Windows smart card login. Issue a certificate to the domain controller, if it is not already configured for TLS. Configure the smart card login per user, rather than as a global policy, to prevent locking out the domain administrator. Enable CRL publishing to the Active Directory server because the domain controller checks the CRL at every login. 16.1.5. Disabling Certificate Enrollment Profiles This section provides instructions on how to disable selected profiles. To disable a certificate profile, edit the corresponding *.cfg file in the /var/lib/pki/ instance_name /ca/profiles/ca/ directory and set the visible and enable parameters to false . For example, to disable all non-CMC profiles: List all non-CMC profiles: In each of the displayed files, set the following parameters to false : Additionally, set visible=false in all CMC profiles to make them invisible on the end entity page: List all CMC profiles: In each of the displayed files, set:
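The per-file edits in Section 16.1.5 can also be scripted. A hedged sketch assuming GNU sed; test against a backup copy of the profiles directory first:

cd /var/lib/pki/instance_name/ca/profiles/ca/
for f in $(ls *.cfg | grep -v "CMC"); do
    sed -i -e 's/^visible=.*/visible=false/' -e 's/^enable=.*/enable=false/' "$f"   # disable non-CMC profiles
done
sed -i 's/^visible=.*/visible=false/' *CMC*.cfg    # hide CMC profiles from the end entity page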
[ "policyset.cmcUserCertSet.6.constraint.class_id=noConstraintImpl policyset.cmcUserCertSet.6.constraint.name=No Constraint policyset.cmcUserCertSet.6.default.class_id=userExtensionDefaultImpl policyset.cmcUserCertSet.6.default.name=User Supplied Key Default policyset.cmcUserCertSet.6.default.params.userExtOID=2.5.29.15", "policyset.cmcUserCertSet.6.constraint.class_id=keyUsageExtConstraintImpl policyset.cmcUserCertSet.6.constraint.name=Key Usage Extension Constraint policyset.cmcUserCertSet.6.constraint.params.keyUsageCritical=true policyset.cmcUserCertSet.6.constraint.params.keyUsageCrlSign=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDataEncipherment=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDigitalSignature=true policyset.cmcUserCertSet.6.constraint.params.keyUsageEncipherOnly=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyAgreement=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyEncipherment=true policyset.cmcUserCertSet.6.constraint.params.keyUsageNonRepudiation=true policyset.cmcUserCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.cmcUserCertSet.6.default.name=Key Usage Default policyset.cmcUserCertSet.6.default.params.keyUsageCritical=true policyset.cmcUserCertSet.6.default.params.keyUsageCrlSign=false policyset.cmcUserCertSet.6.default.params.keyUsageDataEncipherment=false policyset.cmcUserCertSet.6.default.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.6.default.params.keyUsageDigitalSignature=true policyset.cmcUserCertSet.6.default.params.keyUsageEncipherOnly=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyAgreement=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyEncipherment=true policyset.cmcUserCertSet.6.default.params.keyUsageNonRepudiation=true", "policyset.cmcUserCertSet.6.default.class_id=userExtensionDefaultImpl policyset.cmcUserCertSet.6.default.name=User Supplied Key Default policyset.cmcUserCertSet.6.default.params.userExtOID=2.5.29.15", "policyset.ocspCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.ocspCertSet..6.default.name=Key Usage Default policyset.ocspCertSet..6.default.params.keyUsageCritical=true policyset.ocspCertSet..6.default.params.keyUsageCrlSign=false policyset.ocspCertSet..6.default.params.keyUsageDataEncipherment=false policyset.ocspCertSet..6.default.params.keyUsageDecipherOnly=false policyset.ocspCertSet..6.default.params.keyUsageDigitalSignature=true policyset.ocspCertSet..6.default.params.keyUsageEncipherOnly=false policyset.ocspCertSet..6.default.params.keyUsageKeyAgreement=false policyset.ocspCertSet..6.default.params.keyUsageKeyCertSign=false policyset.ocspCertSet..6.default.params.keyUsageKeyEncipherment=true policyset.ocspCertSet..6.default.params.keyUsageNonRepudiation=true policyset.ocspCertSet.7.constraint.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.9 policyset.ocspCertSet.7.default.class_id=extendedKeyUsageExtDefaultImpl policyset.ocspCertSet.7.default.name=Extended Key Usage Default policyset.ocspCertSet.7.default.params.exKeyUsageCritical=false policyset.ocspCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.9", "policyset.serverCertSet.6.default.name=Key Usage Default policyset.serverCertSet.6.default.params.keyUsageCritical=true policyset.serverCertSet.6.default.params.keyUsageDigitalSignature=true 
policyset.serverCertSet.6.default.params.keyUsageNonRepudiation=false policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=true policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=false policyset.serverCertSet.6.default.params.keyUsageKeyAgreement=true policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=false policyset.serverCertSet.6.default.params.keyUsageCrlSign=true policyset.serverCertSet.6.default.params.keyUsageEncipherOnly=false policyset.serverCertSet.6.default.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.7.default.class_id=extendedKeyUsageExtDefaultImpl policyset.cmcUserCertSet.7.default.name=Extended Key Usage Extension Default policyset.cmcUserCertSet.7.default.params.exKeyUsageCritical=false policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.1", "policyset.userCertSet.p7.constraint.class_id=noConstraintImpl policyset.userCertSet.p7.constraint.name=No Constraint policyset.userCertSet.p7.default.class_id=certificatePoliciesExtDefaultImpl policyset.userCertSet.p7.default.name=Certificate Policies Extension Default policyset.userCertSet.p7.default.params.Critical=false policyset.userCertSet.p7.default.params.PoliciesExt.num=1 policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.enable=true policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.policyId=1.1.1.1 policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.enable=false policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.value= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.enable=false policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.explicitText.value= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.noticeNumbers= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.organization=", "Identifier: Certificate Policies: - 2.5.29.32 Critical: no Certificate Policies: Policy Identifier: 1.1.1.1", "input.list=i1,i2,i3,i4 input.i4.class_id=genericInputImpl input.i4.params.gi_display_name0=Name0 input.i4.params.gi_display_name1=Name1 input.i4.params.gi_display_name2=Name2 input.i4.params.gi_display_name3=Name3 input.i4.params.gi_param_enable0=true input.i4.params.gi_param_enable1=true input.i4.params.gi_param_enable2=true input.i4.params.gi_param_enable3=true input.i4.params.gi_param_name0=gname0 input.i4.params.gi_param_name1=gname1 input.i4.params.gi_param_name2=gname2 input.i4.params.gi_param_name3=gname3 input.i4.params.gi_num=4", "policyset.caCertSet.2.default.params.range=825", "cp -p /usr/share/pki/ca/conf/caCert.profile /usr/share/pki/ca/conf/caCert.profile.orig", "vim /usr/share/pki/ca/conf/caCert.profile", "2.default.class=com.netscape.cms.profile.def.ValidityDefault 2.default.name=Validity Default 2.default.params.range=7200", "9.default.class_id=certificatePoliciesExtDefaultImpl 9.default.name=Certificate Policies Extension Default 9.default.params.Critical=false 9.default.params.PoliciesExt.certPolicy0.enable=false 9.default.params.PoliciesExt.certPolicy0.policyId= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.enable=true 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.value=CertificatePolicies.example.com 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.enable=false 
9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.explicitText.value= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.noticeNumbers= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.organization=", "list=2,4,5,6,7,8, 9", "policyset.set1.p1.default.params.dnpattern=UID=$request.uid$, O=Token Key User policyset.set1.p1.default.params.ldap.enable=true policyset.set1.p1.default.params.ldap.basedn=ou=people,dc=host,dc=example,dc=com policyset.set1.p1.default.params.ldapStringAttributes=uid,mail policyset.set1.p1.default.params.ldap.ldapconn.host=localhost.example.com policyset.set1.p1.default.params.ldap.ldapconn.port=389", "vim /etc/pki/ instance_name /ca/CS.cfg profile.list=caUserCert,...,caManualRenewal, tpsExampleEnrollProfile profile.caTokenMSLoginEnrollment.class_id=caUserCertEnrollImpl profile.caTokenMSLoginEnrollment.config=/var/lib/pki/ instance_name /profiles/ca/tpsExampleEnrollProfile.cfg", "vim /etc/pki/ instance_name /tps/CS.cfg op.enroll.userKey.keyGen.signing.ca.profileId=tpsExampleEnrollProfile", "systemctl restart pki-tomcatd-nuxwdog@ instance_name .service", "ls -l /var/lib/pki/ instance_name /ca/profiles/ca/ | grep -v \"CMC\"", "visible=false enable=false", "ls -l /var/lib/pki/ instance_name /ca/profiles/ca/*CMC*", "visible=false" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/certificate_profiles_configuration
6.4. JAR Deployment
6.4. JAR Deployment For development or quick deployment, you can deploy the translator JAR using the Management CLI, AdminShell, or the Management Console. When you deploy dependencies in JAR form to Red Hat JBoss Data Virtualization, Java libraries and any other third-party libraries must be defined in the META-INF/MANIFEST.MF file. Example 6.2. Example MANIFEST.MF file The following example is the /META-INF/MANIFEST.MF file provided in the Loopback translator JAR file, EAP_HOME/modules/system/layers/dv/org/jboss/teiid/translator/loopback/main/translator-loopback-[VERSION].jar .
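A hedged example of the Management CLI route; the CLI controller port and the JAR path are illustrative and depend on your installation:

$ EAP_HOME/bin/jboss-cli.sh --connect
[standalone@localhost:9999 /] deploy /path/to/my-translator.jar

The deploy command streams the JAR to the running server and enables it immediately.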
[ "Manifest-Version: 1.0 Bnd-LastModified: 1516984498575 Build-Jdk: 1.7.0_85 Build-Timestamp: Fri, 26 Jan 2018 11:34:13 -0500 Built-By: mockbuild Bundle-Description: Loopback Translator Bundle-DocURL: http://www.jboss.org/ Bundle-License: http://www.gnu.org/licenses/lgpl.html Bundle-ManifestVersion: 2 Bundle-Name: Loopback Translator Bundle-SymbolicName: org.jboss.teiid.connectors.translator-loopback Bundle-Vendor: JBoss by Red Hat Bundle-Version: 8.12.11.6_4-redhat-64-12 Created-By: Apache Maven Bundle Plugin Export-Package: org.teiid.translator.loopback;uses:=\"org.teiid.language, org.teiid.metadata,org.teiid.translator\";version=\"8.12.11\" Implementation-Title: Loopback Translator Implementation-URL: http://www.jboss.org/teiid/connectors/translator-loo pback Implementation-Vendor: JBoss by Red Hat Implementation-Vendor-Id: org.jboss.teiid.connectors Implementation-Version: 8.12.11.6_4-redhat-64-12 Import-Package: org.teiid.core.util;version=\"[8.12,9)\",org.teiid.languag e;version=\"[8.12,9)\",org.teiid.logging;version=\"[8.12,9)\",org.teiid.met adata;version=\"[8.12,9)\",org.teiid.translator,org.teiid.translator.jdbc .teiid;version=\"[8.12,9)\" Java-Vendor: Oracle Corporation Java-Version: 1.7.0_85 Os-Arch: amd64 Os-Name: Linux Os-Version: 2.6.32-696.18.7.el6.x86_64 Require-Capability: osgi.ee;filter:=\"(&(osgi.ee=JavaSE)(version=1.6))\" Scm-Connection: scm:git:git://github.com/teiid/teiid.git/connectors/tran slator-loopback Scm-Revision: 01b968a220e981ee820ab1b07df148833eb8b995 Scm-Url: http://github.com/teiid/teiid/connectors/translator-loopback Specification-Title: Loopback Translator Specification-Vendor: JBoss by Red Hat Specification-Version: 8.12.11.6_4-redhat-64-12 Tool: Bnd-2.3.0.201405100607" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/jar_deployment
13.3.3. LDAP Client Applications
13.3.3. LDAP Client Applications There are graphical LDAP clients available which support creating and modifying directories, but they are not included with Red Hat Enterprise Linux. One such application is LDAP Browser/Editor, a Java-based tool available online at http://www.iit.edu/~gawojar/ldap/ . Most other LDAP clients access directories as read-only, using them to reference, but not alter, organization-wide information. Some examples of such applications are Sendmail, Mozilla , Gnome Meeting , and Evolution .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ldap-applications
Chapter 8. Configuring the Messaging Transports
Chapter 8. Configuring the Messaging Transports This section describes the concepts critical to understanding JBoss EAP messaging transports, specifically connectors and acceptors. Acceptors are used on the server to define how it can accept connections, while connectors are used by the client to define how it connects to a server. Each concept is discussed in turn and then a practical example shows how clients can make connections to a JBoss EAP messaging server, using JNDI or the Core API. 8.1. Acceptor and Connector Types There are three main types of acceptor and connector defined in the configuration of JBoss EAP. in-vm : In-vm is short for Intra Virtual Machine. Use this connector type when both the client and the server are running in the same JVM, for example, Message Driven Beans (MDBs) running in the same instance of JBoss EAP. http : Used when client and server are running in different JVMs. Uses the undertow subsystem's default port of 8080 and is thus able to multiplex messaging communications over HTTP. Red Hat recommends using the http connector when the client and server are running in different JVMs due to considerations such as port management, especially in a cloud environment. remote : Remote transports are Netty-based components used for native TCP communication when the client and server are running in different JVMs. An alternative to http when it cannot be used. A client must use a connector that is compatible with one of the server's acceptors. For example, only an in-vm-connector can connect to an in-vm-acceptor , and only an http-connector can connect to an http-acceptor , and so on. You can have the management CLI list the attributes for a given acceptor or connector type using the read-children-attributes operation. For example, to see the attributes of all the http-connectors for the default messaging server you would enter: The attributes of all the http-acceptors are read using a similar command: The other acceptor and connector types follow the same syntax. Just provide child-type with the acceptor or connector type, for example, remote-connector or in-vm-acceptor . 8.2. Acceptors An acceptor defines which types of connection are accepted by the JBoss EAP integrated messaging server. You can define any number of acceptors per server. The sample configuration below is modified from the default full-ha configuration profile and provides an example of each acceptor type. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <http-acceptor name="http-acceptor" http-listener="default"/> <remote-acceptor name="legacy-messaging-acceptor" socket-binding="legacy-messaging"/> <in-vm-acceptor name="in-vm" server-id="0"/> ... </server> </subsystem> In the above configuration, the http-acceptor is using Undertow's default http-listener which listens on JBoss EAP's default http port, 8080. The http-listener is defined in the undertow subsystem: <subsystem xmlns="urn:jboss:domain:undertow:10.0"> ... <server name="default-server"> <http-listener name="default" redirect-socket="https" socket-binding="http"/> ... </server> ... </subsystem> Also note how the remote-acceptor above uses the socket-binding named legacy-messaging , which is defined later in the configuration as part of the server's default socket-binding-group . <server xmlns="urn:jboss:domain:8.0"> ... <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> ... 
<socket-binding name="legacy-messaging" port="5445"/> ... </socket-binding-group> </server> In this example, the legacy-messaging socket-binding binds JBoss EAP to port 5445 , and the remote-acceptor above claims the port on behalf of the messaging-activemq subsystem for use by legacy clients. Lastly, the in-vm-acceptor uses a unique value for the server-id attribute so that this server instance can be distinguished from other servers that might be running in the same JVM. 8.3. Connectors A connector defines how to connect to an integrated JBoss EAP messaging server, and is used by a client to make connections. You might wonder why connectors are defined on the server when they are actually used by the client. The reasons for this include: In some instances, the server might act as a client when it connects to another server. For example, one server might act as a bridge to another, or it might want to participate in a cluster. In such cases, the server needs to know how to connect to other servers, and that is defined by connectors. A server can provide connectors using a ConnectionFactory which is looked up by clients using JNDI, which makes it simpler for clients to create connections to the server. You can define any number of connectors per server. The sample configuration below is based on the full-ha configuration profile and includes connectors of each type. <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> ... <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="http" server-name="messaging-server-1"/> <remote-connector name="legacy-remoting-connector" socket-binding="legacy-remoting"/> <in-vm-connector name="in-vm" server-id="0"/> ... </server> </subsystem> Like the http-acceptor from the full-ha profile, the http-connector uses the default http-listener defined by the undertow subsystem. The endpoint attribute declares which http-acceptor to connect to. In this case, the connector will connect to the default http-acceptor . JBoss EAP 7.1 introduced a new server-name attribute for the http-connector . This new attribute is optional, but it is required to be able to connect to the correct http-acceptor on a remote server that is running more than one ActiveMQ Artemis instance. If this attribute is not defined, the value is resolved at runtime to be the name of the parent ActiveMQ Artemis server in which the connector is defined. Also, note that the remote-connector references a socket-binding named legacy-remoting , which is defined in the same way as the legacy-messaging binding used by its remote-acceptor counterpart. Lastly, the in-vm-connector uses the same value for server-id as the in-vm-acceptor since they both run inside the same server instance. Note If the bind address for the public interface is set to 0.0.0.0 , you will see the following warning in the log when you start the JBoss EAP server: This is because a remote connector cannot connect to a server using the 0.0.0.0 address and the messaging-activemq subsystem tries to replace it with the server's host name. The administrator should configure the remote connector to use a different interface address for the socket binding. 8.4. Configuring Acceptors and Connectors There are a number of configuration options for connectors and acceptors. They appear in the configuration as child <param> elements. Each <param> element includes a name and value attribute pair that is understood and used by the default Netty-based factory class responsible for instantiating a connector or acceptor. 
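For example, a remote-acceptor can tune its transport behavior by declaring <param> child elements for the properties listed in the table below. This is a minimal, illustrative sketch rather than configuration taken from a shipped profile; the parameter values shown are assumptions chosen only to demonstrate the syntax:

<remote-acceptor name="legacy-messaging-acceptor" socket-binding="legacy-messaging">
    <!-- Illustrative values only: disable Nagle batching and enlarge the send buffer. -->
    <param name="tcp-no-delay" value="true"/>
    <param name="tcp-send-buffer-size" value="65536"/>
</remote-acceptor>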
In the management CLI, each remote connector or acceptor element includes an internal map of the parameter name and value pairs. For example, to add a new param to a remote-connector named myRemote use the following command: Retrieve parameter values using a similar syntax. You can also include parameters when you create an acceptor or connector, as in the example below. Table 8.1. Transport Configuration Properties Property Description batch-delay Before writing packets to the transport, the messaging server can be configured to batch up writes for a maximum of batch-delay in milliseconds. This can increase the overall throughput for very small messages, at the cost of increased average latency for message transfer. The default is 0. direct-deliver When a message arrives on the server and is delivered to waiting consumers, by default, the delivery is done on the same thread on which the message arrived. This gives good latency in environments with relatively small messages and a small number of consumers, but at the cost of overall throughput. For the highest throughput you can set this property to false . The default is true . http-upgrade-enabled Used by an http-connector to specify that it is using HTTP upgrade and therefore is multiplexing messaging traffic over HTTP. This property is set automatically by JBoss EAP to true when the http-connector is created and does not require configuration by an administrator. http-upgrade-endpoint Specifies the http-acceptor on the server-side to which the http-connector will connect. The connector will be multiplexed over HTTP and needs this information to find the relevant http-acceptor after the HTTP upgrade. This property is set automatically by JBoss EAP when the http-connector is created and does not require configuration by an administrator. local-address For an http or a remote connector, this is used to specify the local address which the client will use when connecting to the remote address. If a local address is not specified then the connector will use any available local address. local-port For an http or a remote connector, this is used to specify which local port the client will use when connecting to the remote address. If the local-port default is used (0) then the connector will let the system pick up an ephemeral port. Valid port values are 0 to 65535 . nio-remoting-threads If configured to use NIO, the messaging server will by default use a number of threads equal to three times the number of cores (or hyper-threads) as reported by Runtime.getRuntime().availableProcessors() for processing incoming packets. To override this value, you can set a custom value for the number of threads. The default is -1 . tcp-no-delay If this is true then Nagle's algorithm will be disabled. This algorithm helps improve the efficiency of TCP/IP networks by reducing the number of packets sent over a network, but it can add latency to small writes. The default is true . tcp-send-buffer-size This parameter determines the size of the TCP send buffer in bytes. The default is 32768 . tcp-receive-buffer-size This parameter determines the size of the TCP receive buffer in bytes. The default is 32768 . use-nio-global-worker-pool This parameter will ensure all Jakarta Messaging connections share a single pool of Java threads, rather than each connection having its own pool. This serves to avoid exhausting the maximum number of processes on the operating system. The default is true . 8.5. Connecting to a Server To connect a client to a server, you must use a suitable connector. There are two ways to do that. 
You could use a ConnectionFactory which is configured on the server and can be obtained via JNDI lookup. Alternatively, you could use the ActiveMQ Artemis core API and configure the whole ConnectionFactory on the client side. 8.5.1. Jakarta Messaging Connection Factories Clients can use JNDI to look up ConnectionFactory objects which provide connections to the server. Connection Factories can expose each of the three types of connector: A connection-factory referencing a remote-connector can be used by a remote client to send messages to or receive messages from the server (assuming the connection-factory has an appropriately exported entry). A remote-connector is associated with a socket-binding that tells the client using the connection-factory where to connect. A connection-factory referencing an in-vm-connector is suitable to be used by a local client to either send messages to or receive messages from a local server. An in-vm-connector is associated with a server-id which tells the client using the connection-factory where to connect, since multiple messaging servers can run in a single JVM. A connection-factory referencing an http-connector is suitable to be used by a remote client to send messages to or receive messages from the server by connecting to its HTTP port before upgrading to the messaging protocol. An http-connector is associated with the socket-binding that represents the HTTP socket, which by default is named http . Since Jakarta Messaging 2.0, a default Jakarta Messaging connection factory is accessible to Jakarta EE applications under the JNDI name java:comp/DefaultJMSConnectionFactory . The messaging-activemq subsystem defines a pooled-connection-factory that is used to provide this default connection factory. Below are the default connectors and connection factories that are included in the full configuration profile for JBoss EAP: <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0"> <server name="default"> [...] <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor" /> <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput"> <param name="batch-delay" value="50"/> </http-connector> <in-vm-connector name="in-vm" server-id="0"/> [...] <connection-factory name="InVmConnectionFactory" connectors="in-vm" entries="java:/ConnectionFactory" /> <pooled-connection-factory name="activemq-ra" transaction="xa" connectors="in-vm" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory"/> [...] </server> </subsystem> The entries attribute of a factory specifies the JNDI names under which the factory will be exposed. Only JNDI names bound in the java:jboss/exported namespace are available to remote clients. If a connection-factory has an entry bound in the java:jboss/exported namespace, a remote client would look up the connection-factory using the text after java:jboss/exported . For example, the RemoteConnectionFactory is bound by default to java:jboss/exported/jms/RemoteConnectionFactory , which means a remote client would look up this connection-factory using jms/RemoteConnectionFactory . A pooled-connection-factory should not have any entry bound in the java:jboss/exported namespace because a pooled-connection-factory is not suitable for remote clients. 8.5.2. Connecting to the Server Using JNDI If the client resides within the same JVM as the server, it can use the in-vm connector provided by the InVmConnectionFactory . 
Here is how the InVmConnectionFactory is typically configured, as found for example in standalone-full.xml . <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm" /> Note the value of the entries attribute. Clients using the InVmConnectionFactory should drop the leading java:/ during lookup, as in the following example: InitialContext ctx = new InitialContext(); ConnectionFactory cf = (ConnectionFactory)ctx.lookup("ConnectionFactory"); Connection connection = cf.createConnection(); Remote clients use the RemoteConnectionFactory , which is usually configured as below: <connection-factory name="RemoteConnectionFactory" scheduled-thread-pool-max-size="10" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/> Remote clients should ignore the leading java:jboss/exported/ of the value for entries , following the example of the code snippet below: final Properties env = new Properties(); env.put(Context.INITIAL_CONTEXT_FACTORY, "org.wildfly.naming.client.WildFlyInitialContextFactory"); env.put(Context.PROVIDER_URL, "http-remoting://remotehost:8080"); InitialContext remotingCtx = new InitialContext(env); ConnectionFactory cf = (ConnectionFactory) remotingCtx.lookup("jms/RemoteConnectionFactory"); Note the value for the PROVIDER_URL property and how the client is using the JBoss EAP http-remoting protocol. Note also how the client is using the org.wildfly.naming.client.WildFlyInitialContextFactory , which implies the client has this class and its encompassing client JAR somewhere in the classpath. For Maven projects, this can be achieved by including the following dependency: <dependencies> <dependency> <groupId>org.wildfly</groupId> <artifactId>wildfly-jms-client-bom</artifactId> <type>pom</type> </dependency> </dependencies> 8.5.3. Connecting to the Server Using the Core API You can use the Core API to make client connections without needing a JNDI lookup. Clients using the Core API require a client JAR in their classpath, just as JNDI-based clients do. ServerLocator Clients use ServerLocator instances to create ClientSessionFactory instances. As their name implies, ServerLocator instances are used to locate servers and create connections to them. In Jakarta Messaging terms think of a ServerLocator in the same way you would a Jakarta Messaging Connection Factory. ServerLocator instances are created using the ActiveMQClient factory class. ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(new TransportConfiguration(InVMConnectorFactory.class.getName())); ClientSessionFactory Clients use a ClientSessionFactory to create ClientSession instances, which are basically connections to a server. In Jakarta Messaging terms think of them as Jakarta Messaging connections. ClientSessionFactory instances are created using the ServerLocator class. ClientSessionFactory factory = locator.createClientSessionFactory(); ClientSession A client uses a ClientSession for consuming and producing messages and for grouping them in transactions. ClientSession instances can support both transactional and non-transactional semantics and also provide an XAResource interface so messaging operations can be performed as part of a Jakarta Transactions transaction. ClientSession instances group ClientConsumers and ClientProducers . 
ClientSession session = factory.createSession(); The simple example below highlights some of what was just discussed: ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA( new TransportConfiguration( InVMConnectorFactory.class.getName())); // In this simple example, we just use one session for both // producing and consuming ClientSessionFactory factory = locator.createClientSessionFactory(); ClientSession session = factory.createSession(); // A producer is associated with an address ... ClientProducer producer = session.createProducer("example"); ClientMessage message = session.createMessage(true); message.getBodyBuffer().writeString("Hello"); // We need a queue attached to the address ... session.createQueue("example", "example", true); // And a consumer attached to the queue ... ClientConsumer consumer = session.createConsumer("example"); // Once we have a queue, we can send the message ... producer.send(message); // We need to start the session before we can receive messages ... session.start(); ClientMessage msgReceived = consumer.receive(); System.out.println("message = " + msgReceived.getBodyBuffer().readString()); session.close(); 8.6. Messaging through a load balancer When using JBoss EAP as a load balancer, clients can call messaging servers behind either a static Undertow HTTP load balancer, or behind a mod_cluster load balancer. Configurations to support messaging clients calling messaging servers through a static load balancer must meet the following requirements: When using JBoss EAP as a load balancer, you must configure the load balancer using HTTP or HTTPS. AJP is not supported for messaging load balancers. For details about configuring Undertow as a static load balancer, see Configure Undertow as a Static Load Balancer in the JBoss EAP Configuration Guide . If JNDI lookups occur on the messaging servers behind the load balancer, you must configure the back-end messaging workers . Clients connecting to the load balancer must reuse the initial connections to the load balancer to ensure they communicate with the same server. Clients connecting to a load balancer must not use the cluster topology to connect to the load balancer. Using the cluster topology might result in messages being sent to a different server, which might result in disruptions to transaction processing. For details about configuring Undertow as a load balancer using mod_cluster, see Configure Undertow as a Load Balancer Using mod_cluster in the JBoss EAP Configuration Guide . Configuration of messaging clients to communicate through a load balancer Clients that connect to a load balancer must be configured to re-use the initial connection rather than using the cluster topology to connect to the load balancer. Re-using the initial connection ensures that the client connects to the same server. Using the cluster topology might result in messages being directed to a different server, which might result in disruptions to transaction processing. A connection factory or pooled connection factory that is used to connect to a load balancer must be configured with the attribute use-topology-for-load-balancing set to false . The following example illustrates how to define this configuration in the CLI. Configuring back-end workers You must configure back-end messaging workers only if you plan to do JNDI lookups behind the load balancer. Create a new outbound socket binding that points to the load-balancing server. Create an HTTP connector that references the load-balancing server socket binding. 
Add the HTTP connector to the connection factory used by the client. Make sure you configure the clients to re-use the initial connection:
[ "/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-connector,include-runtime=true)", "/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-acceptor,include-runtime=true)", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <http-acceptor name=\"http-acceptor\" http-listener=\"default\"/> <remote-acceptor name=\"legacy-messaging-acceptor\" socket-binding=\"legacy-messaging\"/> <in-vm-acceptor name=\"in-vm\" server-id=\"0\"/> </server> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <server name=\"default-server\"> <http-listener name=\"default\" redirect-socket=\"https\" socket-binding=\"http\"/> </server> </subsystem>", "<server xmlns=\"urn:jboss:domain:8.0\"> <socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"legacy-messaging\" port=\"5445\"/> </socket-binding-group> </server>", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> <http-connector name=\"http-connector\" endpoint=\"http-acceptor\" socket-binding=\"http\" server-name=\"messaging-server-1\"/> <remote-connector name=\"legacy-remoting-connector\" socket-binding=\"legacy-remoting\"/> <in-vm-connector name=\"in-vm\" server-id=\"0\"/> </server> </subsystem>", "AMQ121005: Invalid \"host\" value \"0.0.0.0\" detected for \"connector\" connector. Switching to <HOST_NAME>. If this new address is incorrect please manually configure the connector to use the proper one.", "/subsystem=messaging-activemq/server=default/remote-connector=myRemote:map-put(name=params,key=foo,value=bar)", "/subsystem=messaging-activemq/server=default/remote-connector=myRemote:map-get(name=params,key=foo) { \"outcome\" => \"success\", \"result\" => \"bar\" }", "/subsystem=messaging-activemq/server=default/remote-connector=myRemote:add(socket-binding=mysocket,params={foo=bar,foo2=bar2})", "<subsystem xmlns=\"urn:jboss:domain:messaging-activemq:4.0\"> <server name=\"default\"> [...] <http-connector name=\"http-connector\" socket-binding=\"http\" endpoint=\"http-acceptor\" /> <http-connector name=\"http-connector-throughput\" socket-binding=\"http\" endpoint=\"http-acceptor-throughput\"> <param name=\"batch-delay\" value=\"50\"/> </http-connector> <in-vm-connector name=\"in-vm\" server-id=\"0\"/> [...] <connection-factory name=\"InVmConnectionFactory\" connectors=\"in-vm\" entries=\"java:/ConnectionFactory\" /> <pooled-connection-factory name=\"activemq-ra\" transaction=\"xa\" connectors=\"in-vm\" entries=\"java:/JmsXA java:jboss/DefaultJMSConnectionFactory\"/> [...] 
</server> </subsystem>", "<connection-factory name=\"InVmConnectionFactory\" entries=\"java:/ConnectionFactory\" connectors=\"in-vm\" />", "InitialContext ctx = new InitialContext(); ConnectionFactory cf = (ConnectionFactory)ctx.lookup(\"ConnectionFactory\"); Connection connection = cf.createConnection();", "<connection-factory name=\"RemoteConnectionFactory\" scheduled-thread-pool-max-size=\"10\" entries=\"java:jboss/exported/jms/RemoteConnectionFactory\" connectors=\"http-connector\"/>", "final Properties env = new Properties(); env.put(Context.INITIAL_CONTEXT_FACTORY, \"org.wildfly.naming.client.WildFlyInitialContextFactory\"); env.put(Context.PROVIDER_URL, \"http-remoting://remotehost:8080\"); InitialContext remotingCtx = new InitialContext(env); ConnectionFactory cf = (ConnectionFactory) remotingCtx.lookup(\"jms/RemoteConnectionFactory\");", "<dependencies> <dependency> <groupId>org.wildfly</groupId> <artifactId>wildfly-jms-client-bom</artifactId> <type>pom</type> </dependency> </dependencies>", "ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(new TransportConfiguration(InVMConnectorFactory.class.getName()));", "ClientSessionFactory factory = locator.createClientSessionFactory();", "ClientSession session = factory.createSession();", "ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA( new TransportConfiguration( InVMConnectorFactory.class.getName())); // In this simple example, we just use one session for both // producing and consuming ClientSessionFactory factory = locator.createClientSessionFactory(); ClientSession session = factory.createSession(); // A producer is associated with an address ClientProducer producer = session.createProducer(\"example\"); ClientMessage message = session.createMessage(true); message.getBodyBuffer().writeString(\"Hello\"); // We need a queue attached to the address session.createQueue(\"example\", \"example\", true); // And a consumer attached to the queue ClientConsumer consumer = session.createConsumer(\"example\"); // Once we have a queue, we can send the message producer.send(message); // We need to start the session before we can -receive- messages session.start(); ClientMessage msgReceived = consumer.receive(); System.out.println(\"message = \" + msgReceived.getBodyBuffer().readString()); session.close();", "/subsystem=messaging-activemq/pooled-connection-factory=remote-artemis:write-attribute(name=use-topology-for-load-balancing, value=false)", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=balancer-binding:add(host=load_balance.example.com,port=8080)", "/subsystem=messaging-activemq/server=default/http-connector=balancer-connector:add(socket-binding=balancer-binding, endpoint=http-acceptor)", "/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=connectors,value=[balancer-connector])", "/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:write-attribute(name=use-topology-for-load-balancing,value=false)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/acceptors_and_connectors
Telemetry
Telemetry Red Hat Advanced Cluster Security for Kubernetes 4.7 Understanding telemetry Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/telemetry/index
4.3. Adding Hosts
4.3. Adding Hosts Each diskless client must have its own snapshot directory on the NFS server that is used as its read/write file system. The Network Booting Tool can be used to create these snapshot directories. After completing the steps in Section 4.2, "Finish Configuring the Diskless Environment" , a window appears to allow hosts to be added for the diskless environment. Click the New button. In the dialog shown in Figure 4.1, "Add Diskless Host" , provide the following information: Hostname or IP Address/Subnet - Specify the hostname or IP address of a system to add it as a host for the diskless environment. Enter a subnet to specify a group of systems. Operating System - Select the diskless environment for the host or subnet of hosts. Serial Console - Select this checkbox to perform a serial installation. Snapshot name - Provide a subdirectory name to be used to store all of the read/write content for the host. Ethernet - Select the Ethernet device on the host to use to mount the diskless environment. If the host only has one Ethernet card, select eth0 . Ignore the Kickstart File option. It is only used for PXE installations. Figure 4.1. Add Diskless Host In the existing snapshot/ directory in the diskless directory, a subdirectory is created with the Snapshot name specified as the file name. Then, all of the files listed in snapshot/files and snapshot/files.custom are copied from the root/ directory to this new directory.
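To visualize the result, the following is a hypothetical sketch of the layout on the NFS server after adding a host with the Snapshot name client1 ; the top-level directory name and the per-host contents are illustrative only, while root/ , snapshot/ , files , and files.custom are the names used above:

diskless/
  root/             (the shared, read-only root file system)
  snapshot/
    files           (list of files to copy for every host)
    files.custom    (additional, administrator-defined files to copy)
    client1/        (read/write copies of the listed files for the host "client1")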
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Diskless_Environments-Adding_Hosts
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. Important Ansible plug-ins for Red Hat Developer Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/providing-feedback
Chapter 26. Managing Certificates and Certificate Authorities
Chapter 26. Managing Certificates and Certificate Authorities 26.1. Lightweight Sub-CAs If your IdM installation is configured with the integrated Certificate System (CS) certificate authority (CA), you can create lightweight sub-CAs. They enable you to configure services, like virtual private network (VPN) gateways, to accept only certificates issued by one sub-CA. At the same time, you can configure other services to accept only certificates issued by a different sub-CA or the root CA. If you revoke the intermediate certificate of a sub-CA, all certificates issued by this sub-CA are automatically invalid. If you set up IdM using the integrated CA, the automatically created ipa CA is the root CA of the certificate system. All sub-CAs you create are subordinate to this root CA. 26.1.1. Creating a Lightweight Sub-CA For details on creating a sub-CA, see the section called "Creating a Sub-CA from the Web UI" or the section called "Creating a Sub-CA from the Command Line" Creating a Sub-CA from the Web UI To create a new sub-CA named vpn-ca : Open the Authentication tab, and select the Certificates subtab. Select Certificate Authorities and click Add . Enter the name and subject DN for the CA. Figure 26.1. Adding a CA The subject DN must be unique in the IdM CA infrastructure. Creating a Sub-CA from the Command Line To create a new sub-CA named vpn-ca , enter: Name Name of the CA. Authority ID Automatically created, individual ID for the CA. Subject DN Subject distinguished name (DN). The subject DN must be unique in the IdM CA infrastructure. Issuer DN Parent CA that issued the sub-CA certificate. All sub-CAs are created as a child of the IdM root CA. To verify that the new CA signing certificate has been successfully added to the IdM database, run: Note The new CA certificate is automatically transferred to all replicas when they have a certificate system instance installed. 26.1.2. Removing a Lightweight Sub-CA For details on deleting a sub-CA, see the section called "Removing a Sub-CA from the Web UI" or the section called "Removing a Sub-CA from the Command Line" Removing a Sub-CA from the Web UI Open the Authentication tab, and select the Certificates subtab. Select Certificate Authorities . Select the sub-CA to remove and click Delete . Click Delete to confirm. Removing a Sub-CA from the Command Line To delete a sub-CA, enter:
[ "ipa ca-add vpn-ca --subject=\" CN=VPN,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"vpn-ca\" ------------------- Name: vpn-ca Authority ID: ba83f324-5e50-4114-b109-acca05d6f1dc Subject DN: CN=VPN,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM", "certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u", "ipa ca-del vpn-ca ------------------- Deleted CA \"vpn-ca\" -------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/config-certificates
23.15.3. Create Software RAID
23.15.3. Create Software RAID Note On System z, the storage subsystem uses RAID transparently. There is no need to set up a software RAID. Redundant arrays of independent disks (RAIDs) are constructed from multiple storage devices that are arranged to provide increased performance and - in some configurations - greater fault tolerance. Refer to the Red Hat Enterprise Linux Storage Administration Guide for a description of different kinds of RAIDs. To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join the software RAID partitions into a RAID device. RAID Partition Choose this option to configure a partition for software RAID. This option is the only choice available if your disk contains no software RAID partitions. This is the same dialog that appears when you add a standard partition - refer to Section 23.15.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to software RAID . Figure 23.40. Create a software RAID partition RAID Device Choose this option to construct a RAID device from two or more existing software RAID partitions. This option is available if two or more software RAID partitions have been configured. Figure 23.41. Create a RAID device Select the file system type as for a standard partition. Anaconda automatically suggests a name for the RAID device, but you can manually select names from md0 to md15 . Click the checkboxes beside individual storage devices to include or remove them from this RAID. The RAID Level corresponds to a particular type of RAID. Choose from the following options: RAID 0 - distributes data across multiple storage devices. Level 0 RAIDs offer increased performance over standard partitions, and can be used to pool the storage of multiple devices into one large virtual device. Note that Level 0 RAIDs offer no redundancy and that the failure of one device in the array destroys the entire array. RAID 0 requires at least two RAID partitions. RAID 1 - mirrors the data on one storage device onto one or more other storage devices. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two RAID partitions. RAID 4 - distributes data across multiple storage devices, but uses one device in the array to store parity information that safeguards the array in case any device within the array fails. Because all parity information is stored on the one device, access to this device creates a bottleneck in the performance of the array. RAID 4 requires at least three RAID partitions. RAID 5 - distributes data and parity information across multiple storage devices. Level 5 RAIDs therefore offer the performance advantages of distributing data across multiple devices, but do not share the performance bottleneck of level 4 RAIDs because the parity information is also distributed through the array. RAID 5 requires at least three RAID partitions. RAID 6 - level 6 RAIDs are similar to level 5 RAIDs, but instead of storing only one set of parity data, they store two sets. RAID 6 requires at least four RAID partitions. RAID 10 - level 10 RAIDs are nested RAIDs or hybrid RAIDs . Level 10 RAIDs are constructed by distributing data over mirrored sets of storage devices. For example, a level 10 RAID constructed from four RAID partitions consists of two pairs of partitions in which one partition mirrors the other. 
Data is then distributed across both pairs of storage devices, as in a level 0 RAID. RAID 10 requires at least four RAID partitions. For example, four 100 GB RAID partitions arranged this way provide 200 GB of usable storage, and the array continues to operate as long as at least one partition in each mirrored pair remains functional.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/Create_Software_RAID-s390
Chapter 1. Key features
Chapter 1. Key features AMQ Streams simplifies the process of running Apache Kafka in an OpenShift cluster. This guide is intended as a starting point for building an understanding of AMQ Streams. The guide introduces some of the key concepts behind Kafka, which is central to AMQ Streams, explaining briefly the purpose of Kafka components. Configuration points are outlined, including options to secure and monitor Kafka. A distribution of AMQ Streams provides the files to deploy and manage a Kafka cluster, as well as example files for configuration and monitoring of your deployment. A typical Kafka deployment is described, as well as the tools used to deploy and manage Kafka. 1.1. Kafka capabilities The underlying data stream-processing capabilities and component architecture of Kafka can deliver: Microservices and other applications to share data with extremely high throughput and low latency Message ordering guarantees Message rewind/replay from data storage to reconstruct an application state Message compaction to remove old records when using a key-value log Horizontal scalability in a cluster configuration Replication of data to control fault tolerance Retention of high volumes of data for immediate access 1.2. Kafka use cases Kafka's capabilities make it suitable for: Event-driven architectures Event sourcing to capture changes to the state of an application as a log of events Message brokering Website activity tracking Operational monitoring through metrics Log collection and aggregation Commit logs for distributed systems Stream processing so that applications can respond to data in real time 1.3. How AMQ Streams supports Kafka AMQ Streams provides container images and Operators for running Kafka on OpenShift. AMQ Streams Operators are fundamental to the running of AMQ Streams. The Operators provided with AMQ Streams are purpose-built with specialist operational knowledge to effectively manage Kafka. Operators simplify the process of: Deploying and running Kafka clusters Deploying and running Kafka components Configuring access to Kafka Securing access to Kafka Upgrading Kafka Managing brokers Creating and managing topics Creating and managing users
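As an illustration of operator-driven topic management, the following is a minimal sketch of a KafkaTopic custom resource. The API version, topic and cluster names, and sizing are assumptions based on the upstream Strimzi CRDs that AMQ Streams packages, not values taken from this guide:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    # Assumed label linking the topic to a Kafka cluster named "my-cluster"
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3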
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/amq_streams_on_openshift_overview/key-features_str
Getting Started with Red Hat build of Apache Camel for Spring Boot
Getting Started with Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel 4.0
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/getting_started_with_red_hat_build_of_apache_camel_for_spring_boot/index
Managing and allocating storage resources
Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.18 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 5, Block pools provides you with information on how to create, update and delete block pools. Chapter 6, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 8, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 10, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 11, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 12, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 14, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 15, Volume cloning shows you how to create volume clones. Chapter 16, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. 
Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created: 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.2.1. Recovering after OSD lost from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the steps. If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in the later steps, then skip to step 4. 
Find the OSD pod that is in Error state or CrashLoopBackoff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where $failure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that availability zone to start using the new pool with the same name. Chapter 3. Persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 3.1. Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 3.1.1. Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 3.1.2. 
Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for use in the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 3.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your Openshift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. Use the information collected in the previous steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. 
The sample yaml given below can be used to update or create the csi-kms-connection-details ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 3.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. 
Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. 
Click Save.

Next steps
The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims.
Important: Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp.

3.2.1. Overriding Vault connection details using tenant ConfigMap
The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the OpenShift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace.

Procedure
Ensure that you are in the tenant namespace.
Click Workloads -> ConfigMaps.
Click Create ConfigMap.
The values to be overridden for the given tenant namespace can be specified under the data section of the ConfigMap, using the same keys as in the csi-kms-connection-details ConfigMap (for example, vaultAddress).
After the YAML is edited, click Create.

3.3. Enabling and disabling key rotation when using KMS
Common security practices require periodic rotation of encryption keys. You can enable or disable key rotation when using KMS.

3.3.1. Enabling key rotation
To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to PersistentVolumeClaims, Namespace, or StorageClass (in the decreasing order of precedence). <value> can be @hourly, @daily, @weekly, @monthly, or @yearly. If <value> is empty, the default is @weekly. The below examples use @weekly.
Important: Key rotation is only supported for RBD backed volumes.
Annotating Namespace
Annotating StorageClass
Annotating PersistentVolumeClaims

3.3.2. Disabling key rotation
You can disable key rotation for the following:
All the persistent volume claims (PVCs) of a storage class
A specific PVC
Disabling key rotation for all PVCs of a storage class
To disable key rotation for all PVCs, update the annotation of the storage class:
Disabling key rotation for a specific persistent volume claim
Identify the EncryptionKeyRotationCronJob CR for the PVC you want to disable key rotation on:
Where <PVC_NAME> is the name of the PVC that you want to disable.
Apply the following to the EncryptionKeyRotationCronJob CR from the previous step to disable the key rotation:
Update the csiaddons.openshift.io/state annotation from managed to unmanaged:
Where <encryptionkeyrotationcronjob_name> is the name of the EncryptionKeyRotationCronJob CR.
Add suspend: true under the spec field:
Save and exit. The key rotation is now disabled for the PVC.

Chapter 4. Enabling and disabling encryption in-transit post deployment
You can enable encryption in-transit for existing clusters after the deployment of clusters both in internal and external modes.

4.1. Enabling encryption in-transit after deployment in internal mode
Prerequisites
OpenShift Data Foundation is deployed and a storage cluster is created.
Procedure
Patch the storagecluster to add encryption enabled as true to the storage cluster spec:
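A hedged sketch of the patch, assuming the default storage cluster name ocs-storagecluster in the openshift-storage namespace; adjust both to match your cluster:

$ oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge \
    -p '{"spec":{"network":{"connections":{"encryption":{"enabled":true}}}}}'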
Check the configurations.
Wait for around 10 minutes for ceph daemons to restart and then check the pods.
Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume.

4.2. Disabling encryption in-transit after deployment in internal mode
Prerequisites
OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled.
Procedure
Patch the storagecluster to update encryption enabled as false in the storage cluster spec (the same patch as in Section 4.1 with "enabled": false).
Check the configurations.
Wait for around 10 minutes for ceph daemons to restart and then check the pods.
Remount existing volumes. Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume.

4.3. Enabling encryption in-transit after deployment in external mode
Prerequisites
OpenShift Data Foundation is deployed and a storage cluster is created.
Procedure
Patch the storagecluster to add encryption enabled as true to the storage cluster spec (the same patch as in Section 4.1).
Check the connection settings in the CR.

4.3.1. Applying encryption in-transit on Red Hat Ceph Storage cluster
Procedure
Apply the encryption in-transit settings:
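A sketch of the Ceph settings that switch all messenger traffic to secure mode, run from a node with admin access to the external cluster; confirm the exact option set against your Red Hat Ceph Storage version:

ceph config set global ms_client_mode secure
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global rbd_default_map_options ms_mode=secure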
Check the settings.
Restart all Ceph daemons.
Wait for the restarting of all the daemons.

4.3.2. Remounting existing volumes
Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume.

4.4. Disabling encryption in-transit after deployment in external mode
Prerequisites
OpenShift Data Foundation is deployed and a storage cluster is created. Encryption in-transit is enabled for the external mode cluster.
Procedure
Removing encryption in-transit settings from Red Hat Ceph Storage cluster
Remove and check the encryption in-transit configurations.
Restart all Ceph daemons.
Patching the CR
Patch the storagecluster to update encryption enabled as false in the storage cluster spec.
Check the configurations.
Remounting existing volumes
Depending on your best practices for application maintenance, you can choose the best approach for your environment to remount or remap volumes. One way to remount is to delete the existing application pod and bring up another application pod to use the volume. Another option is to drain the nodes where the applications are running. This ensures that the volume is unmounted from the current pod and then mounted to a new pod, allowing for remapping or remounting of the volume.

Chapter 5. Block pools
The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and they cannot be deleted or modified.
Note: Multiple block pools are not supported for external mode OpenShift Data Foundation clusters.

5.1. Managing block pools in internal mode
With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features:
Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance.
Save space for persistent volume claims using storage classes with compression enabled.

5.1.1. Creating a block pool
Prerequisites
You must be logged into the OpenShift Container Platform web console as an administrator.
Procedure
Click Storage -> Data Foundation.
In the Storage systems tab, select the storage system and then click the Storage pools tab.
Click Create storage pool.
Select Volume type as Block.
Enter the Pool name.
Note: Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool.
Select the Data protection policy as either 2-way Replication or 3-way Replication.
Optional: Select the Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed.
Click Create.

5.1.2. Updating an existing pool
Prerequisites
You must be logged into the OpenShift Container Platform web console as an administrator.
Procedure
Click Storage -> Data Foundation.
In the Storage systems tab, select the storage system and then click Storage pools.
Click the Action Menu (...) at the end of the pool you want to update.
Click Edit storage pool.
Modify the form details as follows:
Note: Using the 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool.
Change the Data protection policy to either 2-way Replication or 3-way Replication.
Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed.
Click Save.

5.1.3. Deleting a pool
Use this procedure to delete a pool in OpenShift Data Foundation.
Prerequisites
You must be logged into the OpenShift Container Platform web console as an administrator.
Procedure
Click Storage -> Data Foundation.
In the Storage systems tab, select the storage system and then click the Storage pools tab.
Click the Action Menu (...) at the end of the pool you want to delete.
Click Delete Storage Pool.
Click Delete to confirm the removal of the Pool.
Note: A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity.
Note: When a pool is deleted, the underlying Ceph pool is not deleted.

Chapter 6. Configure storage for OpenShift Container Platform services
You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following:
OpenShift image registry
OpenShift monitoring
OpenShift logging (Loki)
The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment.
Warning: Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure:
OpenShift image registry
OpenShift monitoring
OpenShift logging (Loki)
OpenShift tracing platform (Tempo)
If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support.

6.1. Configuring Image Registry to use OpenShift Data Foundation
OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to an OpenShift Data Foundation Persistent Volume for vSphere and bare metal platforms.
Warning: This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete.
Prerequisites
Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators.
Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators.
A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes.
Procedure
Create a Persistent Volume Claim for the Image Registry to use.
In the OpenShift Web Console, click Storage -> Persistent Volume Claims.
Set the Project to openshift-image-registry.
Click Create Persistent Volume Claim.
From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com.
Specify the Persistent Volume Claim Name, for example, ocs4registry.
Specify an Access Mode of Shared Access (RWX).
Specify a Size of at least 100 GB.
Click Create. Wait until the status of the new Persistent Volume Claim is listed as Bound.
Configure the cluster's Image Registry to use the new Persistent Volume Claim.
Click Administration -> Custom Resource Definitions.
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group.
Click the Instances tab.
Beside the cluster instance, click the Action Menu (...) -> Edit Config.
Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec:, replacing the existing storage: section if necessary. For example:
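A sketch of the storage stanza, assuming the claim created above was named ocs4registry:

storage:
  pvc:
    claim: ocs4registry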
Click Save.
Verify that the new configuration is being used.
Click Workloads -> Pods.
Set the Project to openshift-image-registry.
Verify that the new image-registry-* pod appears with a status of Running, and that the previous image-registry-* pod terminates.
Click the new image-registry-* pod to view pod details.
Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry.

6.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage
You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure.
Prerequisites
Administrative access to OCP Web Console.
A running OpenShift Data Foundation cluster with MCG.
Procedure
Create an ObjectBucketClaim by following the steps in Creating Object Bucket Claim.
Create an image-registry-private-configuration-user secret.
Go to the OpenShift web console.
Click ObjectBucketClaim -> ObjectBucketClaim Data.
In the ObjectBucketClaim data, look for the MCG access key and MCG secret key in the openshift-image-registry namespace.
Create the secret using the following command:
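A hedged sketch of the secret creation, using the access and secret keys copied from the ObjectBucketClaim data; the literal key names follow the registry's S3 configuration convention and should be verified for your release:

$ oc create secret generic image-registry-private-configuration-user \
    --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG_access_key> \
    --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG_secret_key> \
    --namespace openshift-config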
Change the status of managementState of the Image Registry Operator to Managed.
Edit the spec.storage section of the Image Registry Operator configuration file:
Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console. Alternatively, you can also get the information on regionEndpoint and unique-bucket-name from the command:
Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is the ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace.
An image-registry pod spawns after you make the changes to the Operator registry configuration file.
Reset the image registry settings to default.
Verification steps
Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully.
Example output
(Optional) You can also run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully.
Example output

6.3. Configuring monitoring to use OpenShift Data Foundation
OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack.
Important: Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details.
Prerequisites
Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators.
Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators.
A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes.
Procedure
In the OpenShift Web Console, go to Workloads -> Config Maps.
Set the Project dropdown to openshift-monitoring.
Click Create Config Map.
Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets (<, >) with your own values, for example, retention: 24h or storage: 40Gi. Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd.
Example cluster-monitoring-config Config Map:
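A sketch of the Config Map; the retention and storage sizes are the placeholder values mentioned above:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retention: <time to retain monitoring files, e.g. 24h>
      volumeClaimTemplate:
        metadata:
          name: ocs-prometheus-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, e.g. 40Gi>
    alertmanagerMain:
      volumeClaimTemplate:
        metadata:
          name: ocs-alertmanager-claim
        spec:
          storageClassName: ocs-storagecluster-ceph-rbd
          resources:
            requests:
              storage: <size of claim, e.g. 40Gi>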
Click Create to save and create the Config Map.
Verification steps
Verify that the Persistent Volume Claims are bound to the pods.
Go to Storage -> Persistent Volume Claims.
Set the Project dropdown to openshift-monitoring.
Verify that 5 Persistent Volume Claims are visible with a state of Bound, attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods.
Figure 6.1. Monitoring storage created and bound
Verify that the new alertmanager-main-* pods appear with a state of Running.
Go to Workloads -> Pods.
Click the new alertmanager-main-* pods to view the pod details.
Scroll down to Volumes and verify that the volume has a Type, ocs-alertmanager-claim, that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0.
Figure 6.2. Persistent Volume Claims attached to alertmanager-main-* pod
Verify that the new prometheus-k8s-* pods appear with a state of Running.
Click the new prometheus-k8s-* pods to view the pod details.
Scroll down to Volumes and verify that the volume has a Type, ocs-prometheus-claim, that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0.
Figure 6.3. Persistent Volume Claims attached to prometheus-k8s-* pod

6.4. Overprovision level policy control
Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota. For more information, see OpenShift ClusterResourceQuota. With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform.
Prerequisites
Ensure that the OpenShift Data Foundation cluster is created.
Procedure
Deploy the storagecluster either from the command line interface or the user interface.
Label the application namespace.
<desired_name>: Specify a name for the application namespace, for example, quota-rbd.
<desired_label>: Specify a label for the storage quota, for example, storagequota1.
Edit the storagecluster to set the quota limit on the storage class.
<ocs_storagecluster_name>: Specify the name of the storage cluster.
Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec:
<desired_quota_limit>: Specify a desired quota limit for the storage class, for example, 27Ti.
<storage_class_name>: Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd.
<desired_quota_name>: Specify a name for the storage quota, for example, quota1.
<desired_label>: Specify a label for the storage quota, for example, storagequota1.
Save the modified storagecluster.
Verify that the clusterresourcequota is defined.
Note: Expect the clusterresourcequota with the quotaName that you defined in the previous step, for example, quota1.

6.5. Cluster logging for OpenShift Data Foundation
You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging.
Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster solely relies on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch).
Important: Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support.

6.5.1. Configuring persistent storage
You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example:
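A sketch of the relevant logStore stanza, using the values discussed in the text that follows (200G and ocs-storagecluster-ceph-rbd):

spec:
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "ocs-storagecluster-ceph-rbd"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"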
This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. A copy of the shard is replicated across all the nodes and is always available, and the copy can be recovered if at least two nodes exist, due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging.
Note: Omission of the storage block results in a deployment backed by default storage.
For more information, see Configuring cluster logging.

6.5.2. Configuring cluster logging to use OpenShift Data Foundation
Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging.
Note: You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed.
Prerequisites
Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace.
Cluster logging Operator is installed and running in the openshift-logging namespace.
Procedure
Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console.
On the Custom Resource Definitions page, click ClusterLogging.
On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances tab.
On the Cluster Logging page, click Create Cluster Logging. You might have to refresh the page to load the data.
In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd:
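A hedged sketch of the ClusterLogging instance; the curator schedule and component sizes are illustrative assumptions:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 3
      storage:
        storageClassName: "ocs-storagecluster-ceph-rbd"
        size: "200G"
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}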
If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging.
Click Save.
Verification steps
Verify that the Persistent Volume Claims are bound to the elasticsearch pods.
Go to Storage -> Persistent Volume Claims.
Set the Project dropdown to openshift-logging.
Verify that Persistent Volume Claims are visible with a state of Bound, attached to elasticsearch-* pods.
Figure 6.4. Cluster logging created and bound
Verify that the new cluster logging is being used.
Click Workloads -> Pods.
Set the Project to openshift-logging.
Verify that the new elasticsearch-* pods appear with a state of Running.
Click the new elasticsearch-* pod to view pod details.
Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3.
Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page.
Note: Make sure to use a shorter curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data.
Note: To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure for removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.

Chapter 7. Creating Multus networks
OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinitions defines how that interface is created.
OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address.

7.1. Creating network attachment definitions
To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration. The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason the NAD must be created before the Storage Cluster.
Note: Network attachment definitions can only use the whereabouts IP address management (IPAM), and it must specify the range field. ipRanges and plugin chaining are not supported.
As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster).
The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface):
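A sketch of such a NAD, assuming the shared storage interface is ens2 and an illustrative whereabouts range:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ocs-public-cluster
  namespace: openshift-storage
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "ens2",
    "mode": "bridge",
    "ipam": {
      "type": "whereabouts",
      "range": "192.168.1.0/24"
    }
  }'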
Note: All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster).
The following describes the NetworkAttachmentDefinitions for storage traffic on separate Multus networks: public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface). The definitions follow the same macvlan and whereabouts pattern as the example above, with one NAD for ocs-public and one for ocs-cluster.
Note: All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public, and ens3 for ocs-cluster).

Chapter 8. Backing OpenShift Container Platform applications with OpenShift Data Foundation
You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation.
Prerequisites
OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console.
OpenShift Data Foundation is installed and running in the openshift-storage namespace.
Procedure
In the OpenShift Web Console, perform one of the following:
Click Workloads -> Deployments.
In the Deployments page, you can do one of the following:
Select any existing deployment and click the Add Storage option from the Action menu (...).
Create a new deployment and then add storage:
Click Create Deployment to create a new deployment.
Edit the YAML based on your requirement to create a deployment.
Click Create.
Select Add Storage from the Actions drop-down menu on the top right of the page.
Click Workloads -> Deployment Configs.
In the Deployment Configs page, you can do one of the following:
Select any existing deployment and click the Add Storage option from the Action menu (...).
Create a new deployment and then add storage:
Click Create Deployment Config to create a new deployment.
Edit the YAML based on your requirement to create a deployment.
Click Create.
Select Add Storage from the Actions drop-down menu on the top right of the page.
In the Add Storage page, you can choose one of the following options:
Click the Use existing claim option and select a suitable PVC from the drop-down list.
Click the Create new claim option:
Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list.
Provide a name for the Persistent Volume Claim.
Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode.
Note: ReadOnlyMany (ROX) is deactivated as it is not supported.
Select the size of the desired storage capacity.
Note: You can expand the block PVs but cannot reduce the storage capacity after the creation of a Persistent Volume Claim.
Specify the mount path and subpath (if required) for the mount path volume inside the container.
Click Save.
Verification steps
Depending on your configuration, perform one of the following:
Click Workloads -> Deployments.
Click Workloads -> Deployment Configs.
Set the Project as required.
Click the deployment for which you added storage to display the deployment details.
Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned.
Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page.

Chapter 9. Adding file and object storage to an existing external OpenShift Data Foundation cluster
When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster.
Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage.
Prerequisites
OpenShift Data Foundation 4.17 is installed and running on the OpenShift Container Platform version 4.17 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state.
Your external Red Hat Ceph Storage cluster is configured with one or both of the following:
a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage
a Metadata Server (MDS) pool for file storage
Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment.
Procedure
Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using one of the following methods, either CSV or ConfigMap.
Important: Downloading the ceph-external-cluster-details-exporter.py python script using CSV will no longer be supported from OpenShift Data Foundation version 4.19 onward. Using the ConfigMap will be the only supported method.
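Hedged sketches of both download methods; the CSV annotation key and the ConfigMap name are recalled from comparable releases and should be verified against your version:

CSV:
$ oc get csv $(oc get csv -n openshift-storage | grep ocs-operator | awk '{print $1}') \
    -n openshift-storage \
    -o jsonpath='{.metadata.annotations.external\.features\.ocs\.openshift\.io/export-script}' \
    | base64 --decode > ceph-external-cluster-details-exporter.py

ConfigMap:
$ oc get cm -n openshift-storage rook-ceph-external-cluster-script-config \
    -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py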
Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this.
--run-as-user: The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
--rgw-pool-prefix: The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
Generate and save configuration details from the external Red Hat Ceph Storage cluster.
Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster:
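A hedged sketch of the generation step; --rbd-data-pool-name is a required flag of the script in comparable releases but is not described in this section, so treat the exact flag set as an assumption:

# python3 ceph-external-cluster-details-exporter.py \
    --rbd-data-pool-name <rbd_block_pool_name> \
    --monitoring-endpoint <ceph_mgr_prometheus_exporter_ip> \
    --monitoring-endpoint-port <port> \
    --run-as-user <client_name> \
    --rgw-endpoint <rgw_endpoint_ip:port> \
    --rgw-pool-prefix <prefix> > external-cluster-config.json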
--monitoring-endpoint: Is optional. It accepts a comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated.
--monitoring-endpoint-port: Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint. If not provided, the value is automatically populated.
--run-as-user: The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set.
--rgw-endpoint: Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter)
--rgw-pool-prefix: The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used.
User permissions are updated as shown:
Note: Ensure that all the parameters (including the optional arguments), except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode.
Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text.
Upload the generated JSON file:
Log in to the OpenShift web console.
Click Workloads -> Secrets.
Set the project to openshift-storage.
Click rook-ceph-external-cluster-details.
Click Actions (...) -> Edit Secret.
Click Browse and upload the external-cluster-config.json file.
Click Save.
Verification steps
To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name.
On the Overview -> Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy.
If you added a Metadata Server for file storage:
Click Workloads -> Pods and verify that csi-cephfsplugin-* pods are newly created and are in the Running state.
Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created.
If you added the Ceph Object Gateway for object storage:
Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created.
To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data Foundation -> Storage Systems tab and then click on the storage system name.
Click the Object tab and confirm that Object Service and Data resiliency have a green tick indicating they are healthy.

Chapter 10. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or to have both roles. See the Section 10.3, "Manual creation of infrastructure nodes" section for more information.

10.1. Anatomy of an Infrastructure node
Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation.
Labeled with node-role.kubernetes.io/infra
Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources.
Tainted with node.ocs.openshift.io/storage="true"
The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes.
Note: Adding the storage taint on nodes might require toleration handling for the other daemonset pods such as the openshift-dns daemonset. For information about how to manage the tolerations, see the Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints.
Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services:
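A sketch of the node spec fragment; the cluster.ocs.openshift.io/openshift-storage label shown alongside the infra role is an assumption drawn from typical OpenShift Data Foundation node templates:

spec:
  taints:
  - effect: NoSchedule
    key: node.ocs.openshift.io/storage
    value: "true"
metadata:
  labels:
    node-role.kubernetes.io/worker: ""
    node-role.kubernetes.io/infra: ""
    cluster.ocs.openshift.io/openshift-storage: ""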
10.2. Machine sets for creating Infrastructure nodes
If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the Machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels.
Note: In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones.
A Machine Set template for this purpose creates nodes with the appropriate taint and labels required for infrastructure nodes, as in the example above. These nodes will be used to run OpenShift Data Foundation services.
Important: If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4.

10.3. Manual creation of infrastructure nodes
Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: the node must carry the node-role.kubernetes.io/infra="" label and the OpenShift Data Foundation taint described above.
Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads.
Warning: Do not remove the node-role node-role.kubernetes.io/worker="". The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role node-role.kubernetes.io/infra="" and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements.

10.4. Taint a node from the user interface
This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment.
Procedure
In the OpenShift Web Console, click Compute -> Nodes, and then select the node which has to be tainted.
In the Details page, click Edit taints.
Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true>, and Effect <NoSchedule> fields.
Click Save.
Verification steps
Follow the steps to verify that the node has been tainted successfully:
Navigate to Compute -> Nodes.
Select the node to verify its status, and then click the YAML tab.
In the specs section, check that the taint key, value, and effect match the values entered above.
Additional resources
For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere.

Chapter 11. Managing Persistent Volume Claims
11.1. Configuring application pods to use OpenShift Data Foundation
Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod.
Prerequisites
Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators.
The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes.
Procedure
Create a Persistent Volume Claim (PVC) for the application to use.
In OpenShift Web Console, click Storage -> Persistent Volume Claims.
Set the Project for the application pod.
Click Create Persistent Volume Claim.
Specify a Storage Class provided by OpenShift Data Foundation.
Specify the PVC Name, for example, myclaim.
Select the required Access Mode.
Note: The Access Mode, Shared access (RWX), is not supported in IBM FlashSystem.
For Rados Block Device (RBD), if the Access mode is ReadWriteOnce (RWO), select the required Volume mode. The default volume mode is Filesystem.
Specify a Size as per application requirement.
Click Create and wait until the PVC is in Bound status.
Configure a new or existing application pod to use the new PVC.
For a new application pod, perform the following steps:
Click Workloads -> Pods.
Create a new application pod.
Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod. For example:
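A sketch of the volumes stanza, assuming the PVC created above was named myclaim; the volume name is illustrative:

volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim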
For an existing application pod, perform the following steps:
Click Workloads -> Deployment Configs.
Search for the required deployment config associated with the application pod.
Click its Action menu (...) -> Edit Deployment Config.
Under the spec: section, add a volumes: section to add the new PVC as a volume for the application pod (using the same volumes: snippet shown above) and click Save.
Verify that the new configuration is being used:
Click Workloads -> Pods.
Set the Project for the application pod.
Verify that the application pod appears with a status of Running.
Click the application pod name to view pod details.
Scroll down to the Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim.

11.2. Viewing Persistent Volume Claim request status
Use this procedure to view the status of a PVC request.
Prerequisites
Administrator access to OpenShift Data Foundation.
Procedure
Log in to OpenShift Web Console.
Click Storage -> Persistent Volume Claims.
Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list.
Check the Status column corresponding to the required PVC.
Click the required Name to view the PVC details.

11.3. Reviewing Persistent Volume Claim request events
Use this procedure to review and address Persistent Volume Claim (PVC) request events.
Prerequisites
Administrator access to OpenShift Web Console.
Procedure
In the OpenShift Web Console, click Storage -> Data Foundation.
In the Storage systems tab, select the storage system and then click Overview -> Block and File.
Locate the Inventory card to see the number of PVCs with errors.
Click Storage -> Persistent Volume Claims.
Search for the required PVC using the Filter textbox.
Click on the PVC name and navigate to Events.
Address the events as required or as directed.

11.4. Expanding Persistent Volume Claims
OpenShift Data Foundation 4.6 onwards has the ability to expand Persistent Volume Claims, providing more flexibility in the management of persistent storage resources.
Expansion is supported for the following Persistent Volumes:
PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph File System (CephFS) for volume mode Filesystem.
PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem.
PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Block.
PVC with ReadWriteOncePod (RWOP) that is based on Ceph File System (CephFS) or Network File System (NFS) for volume mode Filesystem.
PVC with ReadWriteOncePod (RWOP) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem. With RWOP access mode, you mount the volume as read-write by a single pod on a single node.
Note: PVC expansion is not supported for OSD, MON, and encrypted PVCs.
Prerequisites
Administrator access to OpenShift Web Console.
Procedure
In OpenShift Web Console, navigate to Storage -> Persistent Volume Claims.
Click the Action Menu (...) next to the Persistent Volume Claim you want to expand.
Click Expand PVC.
Select the new size of the Persistent Volume Claim, then click Expand.
To verify the expansion, navigate to the PVC's details page and verify that the Capacity field has the correct size requested.
Note: When expanding PVCs based on Ceph RADOS Block Devices (RBDs), if the PVC is not already attached to a pod, the Condition type is FileSystemResizePending in the PVC's details page. Once the volume is mounted, the filesystem resize succeeds and the new size is reflected in the Capacity field.

11.5. Dynamic provisioning
11.5.1. About dynamic provisioning
The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators (cluster-admin) or Storage Administrators (storage-admin) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources.
The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning, or both provisioning types.

11.5.2. Dynamic provisioning in OpenShift Data Foundation
Red Hat OpenShift Data Foundation is software-defined storage that is optimized for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers.
OpenShift Data Foundation supports a variety of storage types, including:
Block storage for databases
Shared file storage for continuous integration, messaging, and data aggregation
Object storage for archival, backup, and media storage
Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview).
In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options:
Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block.
Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem.
Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem.
Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node.
The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file.

11.5.3. Available dynamic provisioning plug-ins
OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources (storage type: provisioner plug-in name, with notes where applicable):
OpenStack Cinder: kubernetes.io/cinder
AWS Elastic Block Store (EBS): kubernetes.io/aws-ebs. For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS): Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk: kubernetes.io/azure-disk
Azure File: kubernetes.io/azure-file. The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD): kubernetes.io/gce-pd. In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere: kubernetes.io/vsphere-volume
Red Hat Virtualization: csi.ovirt.org
Important: Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation.

Chapter 12. Reclaiming space on target volumes
The deleted files or chunks of zero data sometimes take up storage space on the Ceph cluster, resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume:
fstrim - This operation is used on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of the reclaim space operation.
rbd sparsify - This operation is used when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data.
Note: Only the Ceph RBD volumes support the reclaim space operation. The reclaim space operation involves a performance penalty when it is being executed.
You can use one of the following methods to reclaim the space:
Enabling reclaim space operation by annotating PersistentVolumeClaims (recommended method for enabling the reclaim space operation)
Enabling reclaim space operation using ReclaimSpaceJob
Enabling reclaim space operation using ReclaimSpaceCronJob

12.1. Enabling reclaim space operation by annotating PersistentVolumeClaims
Use this procedure to automatically invoke the reclaim space operation on an annotated persistent volume claim (PVC) based on a given schedule.
Note: The schedule value is in the same format as Kubernetes CronJobs, which sets the schedule and/or interval of the recurring operation request.
The recommended schedule interval is @weekly. If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly. Do not schedule multiple ReclaimSpace operations @weekly or at the same time. The minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (at 00:00 every day) or 0 3 * * * (at 3:00 every day). Schedule the ReclaimSpace operation during off-peak hours, a maintenance window, or an interval when the workload input/output is expected to be low.
A ReclaimSpaceCronJob is recreated when the schedule is modified. It is automatically deleted when the annotation is removed.
Procedure
Get the PVC details.
Add the annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create a reclaimspacecronjob:
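A sketch of the two steps, assuming a PVC named data-pvc:

$ oc get pvc data-pvc
$ oc annotate pvc data-pvc "reclaimspace.csiaddons.openshift.io/schedule=@monthly"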
Verify that the reclaimspacecronjob is created in the format "<pvc-name>-xxxxxxx".
Modify the schedule to run this job automatically.
Verify that the schedule for the reclaimspacecronjob has been modified.

12.2. Disabling reclaim space for a specific PersistentVolumeClaim
To disable reclaim space for a specific PersistentVolumeClaim (PVC), modify the associated ReclaimSpaceCronJob custom resource (CR).
Identify the ReclaimSpaceCronJob CR for the PVC you want to disable reclaim space on:
Replace <PVC_NAME> with the name of the PVC.
Apply the following to the ReclaimSpaceCronJob CR from step 1 to disable the reclaim space:
Update the csiaddons.openshift.io/state annotation from "managed" to "unmanaged".
Replace <RECLAIMSPACECRONJOB_NAME> with the name of the ReclaimSpaceCronJob CR.
Add suspend: true under the spec field:

12.3. Enabling reclaim space operation using ReclaimSpaceJob
ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke the reclaim space operation on the target volume. This is a one-time method that immediately starts the reclaim space operation. You have to repeat the creation of the ReclaimSpaceJob CR to repeat the reclaim space operation when required.
Note: The recommended interval between the reclaim space operations is weekly. Ensure that the minimum interval between each operation is at least 24 hours. Schedule the reclaim space operation during off-peak hours, a maintenance window, or when the workload input/output is expected to be low.
Procedure
Create and apply the following custom resource for the reclaim space operation:
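A sketch of the CR, with illustrative names:

apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceJob
metadata:
  name: sample-1
spec:
  target:
    persistentVolumeClaim: pvc-1
  timeout: 360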
A ReclaimSpaceJob that failed this way shows the error when you inspect it with -o yaml; its status contains the message Failed to make controller request: context deadline exceeded and the result Failed.
Restart the csi-addons operator pod after creating the ConfigMap. All Reclaim Space Operations started after the ConfigMap creation use the customized timeout.
Chapter 13. Finding and cleaning stale subvolumes (Technology Preview)
Sometimes stale subvolumes do not have a respective Kubernetes reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool.
Important: Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope.
Prerequisites
Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal.
Procedure
Find the stale subvolumes by using the --stale flag with the subvolumes command, as shown in the example after this procedure.
Delete the stale subvolumes.
Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command.
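For example (the subvolume names shown are from a sample cluster):

odf subvolume ls --stale
Filesystem Subvolume Subvolumegroup State
ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale
ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale

odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi
Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted
Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110005 deleted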
Chapter 14. Volume Snapshots
A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time, and they can be used as building blocks for developing an application.
A volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified.
You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC.
Note: Persistent Volume encryption now supports volume snapshots.
14.1. Creating volume snapshots
You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page.
Prerequisites
For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure that all IO is stopped before taking the snapshot.
Note: OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots, or use any quiesce mechanism provided by the application to ensure it.
Procedure
From the Persistent Volume Claims page
Click Storage -> Persistent Volume Claims from the OpenShift Web Console.
To create a volume snapshot, do one of the following:
Beside the desired PVC, click Action menu (...) -> Create Snapshot.
Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot.
Enter a Name for the volume snapshot.
Choose the Snapshot Class from the drop-down list.
Click Create. You will be redirected to the Details page of the volume snapshot that is created.
From the Volume Snapshots page
Click Storage -> Volume Snapshots from the OpenShift Web Console.
In the Volume Snapshots page, click Create Volume Snapshot.
Choose the required Project from the drop-down list.
Choose the Persistent Volume Claim from the drop-down list.
Enter a Name for the snapshot.
Choose the Snapshot Class from the drop-down list.
Click Create. You will be redirected to the Details page of the volume snapshot that is created.
Verification steps
Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed.
Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed.
Wait for the volume snapshot to be in Ready state.
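As a CLI alternative to the console workflow above, a snapshot can also be requested with a VolumeSnapshot resource. The following is a minimal sketch; the snapshot class name shown is an assumption based on the default RBD snapshot class naming, and the resource and PVC names are illustrative, so verify the classes available in your cluster before applying it:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass  # assumed default class name
  source:
    persistentVolumeClaimName: my-pvc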
14.2. Restoring volume snapshots
When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC.
You can restore a volume snapshot from either the Persistent Volume Claims page or the Volume Snapshots page.
Procedure
From the Persistent Volume Claims page
You can restore a volume snapshot from the Persistent Volume Claims page only if the parent PVC is present.
Click Storage -> Persistent Volume Claims from the OpenShift Web Console.
Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC.
In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore.
Click Restore as new PVC.
Enter a name for the new PVC.
Select the Storage Class name.
Select the Access Mode of your choice.
Important: The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode.
Optional: For RBD, select Volume mode.
Click Restore. You are redirected to the new PVC details page.
From the Volume Snapshots page
Click Storage -> Volume Snapshots from the OpenShift Web Console.
In the Volume Snapshots tab, click the Action menu (...) next to the volume snapshot you want to restore.
Click Restore as new PVC.
Enter a name for the new PVC.
Select the Storage Class name.
Select the Access Mode of your choice.
Important: The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode.
Optional: For RBD, select Volume mode.
Click Restore. You are redirected to the new PVC details page.
Verification steps
Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page.
Wait for the new PVC to reach Bound state.
14.3. Deleting volume snapshots
Prerequisites
For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present.
Procedure
From the Persistent Volume Claims page
Click Storage -> Persistent Volume Claims from the OpenShift Web Console.
Click on the PVC name which has the volume snapshot that needs to be deleted.
In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot.
From the Volume Snapshots page
Click Storage -> Volume Snapshots from the OpenShift Web Console.
In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot.
Verification steps
Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page.
Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed.
Chapter 15. Volume cloning
A clone is a duplicate of an existing storage volume that is used as any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD).
15.1. Creating a clone
Prerequisites
The source PVC must be in Bound state and must not be in use.
Note: Do not create a clone of a PVC if a pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused).
Procedure
Click Storage -> Persistent Volume Claims from the OpenShift Web Console.
To create a clone, do one of the following:
Beside the desired PVC, click Action menu (...) -> Clone PVC.
Click on the PVC that you want to clone and click Actions -> Clone PVC.
Enter a Name for the clone.
Select the access mode of your choice.
Important: The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode.
Enter the required size of the clone.
Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it need not necessarily be the same as the parent PVC.
Click Clone. You are redirected to the new PVC details page.
Wait for the cloned PVC status to become Bound.
The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC.
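From the CLI, a clone can also be requested with a PVC that names the parent PVC as its dataSource. This is a minimal sketch: the names, storage class, and 1Gi size are illustrative, and the requested size must match the parent PVC, since a PVC cannot be cloned with a different size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-clone
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    name: my-pvc            # the parent PVC to clone
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # must match the parent PVC's size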
Chapter 16. Managing container storage interface (CSI) component placements
Each cluster consists of a number of dedicated nodes, such as infra and storage nodes. However, an infra node with a custom taint is not able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes.
Procedure
Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor.
Display the configmap to check the added toleration, for example a toleration for the taint nodetype=infra:NoSchedule.
Note: Ensure that all non-string values in the Tolerations value field have double quotation marks. For example, the value true, which is of type boolean, and the value 1, which is of type int, must be input as "true" and "1".
Restart the rook-ceph-operator if the csi-cephfsplugin-* and csi-rbdplugin-* pods fail to come up on their own on the infra nodes, for example by deleting the rook-ceph-operator pod in the openshift-storage namespace.
Verification step
Verify that the csi-cephfsplugin-* and csi-rbdplugin-* pods are running on the infra nodes.
Chapter 17. Using 2-way replication with CephFS
To reduce storage overhead with CephFS when data resiliency is not a primary concern, you can opt for 2-way replication (replica-2). This reduces the amount of storage space used and decreases the level of fault tolerance.
There are two ways to use replica-2 for CephFS:
Edit the existing default pool to replica-2 and use it with the default CephFS storageclass.
Add an additional CephFS data pool with replica-2.
17.1. Editing the existing default CephFS data pool to replica-2
Use this procedure to edit the existing default CephFS pool to replica-2 and use it with the default CephFS storageclass.
Procedure
Patch the storagecluster to change the default CephFS data pool to replica-2, and then check the pool details, as shown in the example below.
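For example, patching the default storage cluster and then verifying the pool size (the replicated section of the output shows "size": 2 on a sample cluster):

oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size", "value": 2 }]'
oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq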
17.2. Adding an additional CephFS data pool with replica-2
Use this procedure to add an additional CephFS data pool with replica-2.
Prerequisites
Ensure that you are logged in to the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state.
Procedure
Click Storage -> StorageClasses -> Create Storage Class.
Select CephFS Provisioner.
Under Storage Pool, click Create new storage pool.
Fill in the Create Storage Pool fields.
Under Data protection policy, select 2-way Replication.
Confirm the Storage Pool creation.
In the Storage Class creation form, choose the newly created Storage Pool.
Confirm the Storage Class creation.
Verification
Click Storage -> Data Foundation.
In the Storage systems tab, select the new storage system. The Details tab of the storage system reflects the correct volume and device types you chose during creation.
Chapter 18. Creating exports using NFS
This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster.
Follow the instructions below to create exports and access them externally from the OpenShift cluster:
Section 18.1, "Enabling the NFS feature"
Section 18.2, "Creating NFS exports"
Section 18.3, "Consuming NFS exports in-cluster"
Section 18.4, "Consuming NFS exports externally from the OpenShift cluster"
18.1. Enabling the NFS feature
To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface.
Prerequisites
OpenShift Data Foundation is installed and running in the openshift-storage namespace.
The OpenShift Data Foundation installation includes a CephFilesystem.
Procedure
Enable the NFS feature from the CLI by patching the storage cluster: oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{"spec": {"nfs":{"enable": true}}}'.
Verification steps
NFS installation and configuration is complete when the following conditions are met:
The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready.
Check that all the csi-nfsplugin-* pods are running. The output has multiple pods, for example per-node csi-nfsplugin pods and csi-nfsplugin-provisioner pods.
18.2. Creating NFS exports
NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways:
Create an NFS PVC using a YAML manifest that specifies the required accessModes, a size, and storageClassName: ocs-storagecluster-ceph-nfs. Replace <desired_name> with a name for the PVC, for example, my-nfs-export. The export is created once the PVC reaches the Bound state.
Note: volumeMode: Block does not work for NFS volumes.
Create NFS PVCs from the OpenShift Container Platform web console.
Prerequisites
Ensure that you are logged in to the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster.
Procedure
In the OpenShift Web Console, click Storage -> Persistent Volume Claims.
Set the Project to openshift-storage.
Click Create PersistentVolumeClaim.
Specify the Storage Class, ocs-storagecluster-ceph-nfs.
Specify the PVC Name, for example, my-nfs-export.
Select the required Access Mode.
Specify a Size as per application requirement.
Select Volume mode as Filesystem.
Note: Block mode is not supported for NFS PVCs.
Click Create and wait until the PVC is in Bound status.
18.3. Consuming NFS exports in-cluster
Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC in one of two ways:
Using a YAML manifest: define a pod whose volumes: section references the PVC created in Section 18.2, "Creating NFS exports" through persistentVolumeClaim.claimName, replacing <pvc_name> with the PVC you previously created, for example, my-nfs-export.
Using the OpenShift Container Platform web console.
Procedure
On the OpenShift Container Platform web console, navigate to Workloads -> Pods.
Click Create Pod to create a new application pod.
Under the metadata section, add a name, for example, nfs-export-example, with namespace as openshift-storage.
Under the spec: section, add a containers: section with image and volumeMounts sections.
Under the spec: section, add a volumes: section to add the NFS PVC as a volume for the application pod, referencing the PVC name, for example, my-nfs-export.
18.4. Consuming NFS exports externally from the OpenShift cluster
NFS clients outside of the OpenShift cluster can mount NFS exports created by a previously created PVC.
Procedure
After the nfs flag is enabled, a single-server CephNFS is deployed by Rook. You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step, for example by listing the rook-ceph-nfs pods in the openshift-storage namespace, describing the pod, and filtering for ceph_nfs.
Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service that references the NFS server created by OpenShift Data Foundation: the Service exposes port 2049 and selects app: rook-ceph-nfs with ceph_nfs set to the value you got in step 1 (replace <my-nfs> with that value).
Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and from the status of the LoadBalancer Service created in the previous step.
Get the share path from the PV:
Get the name of the PV associated with the NFS export's PVC, replacing <pvc_name> with your own PVC name.
Use the PV name obtained previously to get the NFS export's share path, which is stored in the PV's spec.csi.volumeAttributes.share field.
Get an ingress address for the NFS server from the LoadBalancer Service status. A service's ingress status may have multiple addresses; choose the one you want to use for external clients.
Connect the external client using the share path and ingress address from the previous steps. The example below walks through these steps and mounts the export to the client's directory path /export/mount/path.
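For example (the values shown are from a sample cluster, and the host name ingress-id.somedomain.com is illustrative):

oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}'
pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d

oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}'
/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215

oc -n openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}'
[{"hostname":"ingress-id.somedomain.com"}]

mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path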
If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server.
Chapter 19. Annotating encrypted RBD storage classes
Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy to any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation version 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning.
The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated.
Chapter 20. Enabling faster client IO or recovery IO during OSD backfill
During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO significantly reduces OSD recovery time. The valid recovery profile options are balanced, high_client_ops, and high_recovery_ops. Set the recovery profile using the following procedure.
Prerequisites
Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal.
Procedure
Check the current recovery profile.
Modify the recovery profile, replacing the option with balanced, high_client_ops, or high_recovery_ops.
Verify the updated recovery profile.
These steps are shown in the example below.
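For example, using the odf CLI tool (high_recovery_ops is an illustrative choice):

odf get recovery-profile
odf set recovery-profile high_recovery_ops
odf get recovery-profile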
Chapter 21. Setting Ceph OSD full thresholds
You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR.
21.1. Setting Ceph OSD full thresholds using the ODF CLI tool
You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool. This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased.
Prerequisites
Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal.
Procedure
Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full, backfillfull, and nearfull. See the following examples for how to use each subcommand.
full
This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85.
Note: If the value is set too close to 1.0, the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow.
For example, set the Ceph OSD full ratio to 0.9 with odf set full 0.9, and then add capacity. For instructions on adding capacity for your specific use case, see the Scaling storage guide.
If OSDs continue to be stuck or pending, or do not come up at all:
Stop all IOs.
Increase the full ratio to 0.92 with odf set full 0.92.
Wait for the cluster rebalance to happen.
Once the cluster rebalance is complete, change the full ratio back to its original value of 0.85 with odf set full 0.85.
backfillfull
This subcommand allows updating the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the specified capacity. The default value is 0.80.
Note: If the value is set too close to 1.0, the OSDs become full and the cluster is not able to backfill.
For example, to set backfillfull to 0.85, run odf set backfillfull 0.85.
nearfull
This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the specified capacity. The default value is 0.75.
For example, to set nearfull to 0.8, run odf set nearfull 0.8.
21.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR
You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings.
Procedure
You can update the StorageCluster CR to change the settings for full, backfillfull, and nearfull, as shown in the examples after this section.
full
Update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85.
Note: If the value is set too close to 1.0, the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow.
backfillfull
Set the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the specified capacity. The default value is 0.80.
Note: If the value is set too close to 1.0, the OSDs become full and the cluster is not able to backfill.
nearfull
Set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the specified capacity. The default value is 0.75.
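For example, the following patches set full to 0.9, backfillfull to 0.85, and nearfull to 0.8 on the default ocs-storagecluster (the ratio values are illustrative):

oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephCluster/fullRatio", "value": 0.90 }]'

oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephCluster/backfillFullRatio", "value": 0.85 }]'

oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ "op": "replace", "path": "/spec/managedResources/cephCluster/nearFullRatio", "value": 0.8 }]'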
[ "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'", "oc get storagecluster", "NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.17.0", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m", "oc get pods | grep osd", "rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready", "oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'", "failed_osd_id=0 #replace with the ID of the failed OSD", "failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')", "failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"", "replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"", "toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage", "ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it", "oc delete pod -l rook-ceph-operator -n openshift-storage", "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF", "apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: 
rbac.authorization.k8s.io", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>", "apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] 
\"vaultBackend\": \"kv\" }", "--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"", "oc get namespace default NAME STATUS AGE default Active 5d2h", "oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s", "oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated", "oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s", "oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h", "oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/enable: false\" storageclass.storage.k8s.io/rbd-sc annotated", "oc get encryptionkeyrotationcronjob -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'", "oc annotate encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true", "oc patch encryptionkeyrotationcronjob <encryptionkeyrotationcronjob_name> -p '{\"spec\": {\"suspend\": true}}' --type=merge.", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched", "oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: true", "oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 
Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m", "oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0", "~ USD oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched", "oc get storagecluster ocs-storagecluster -n openshift-storage -o yaml | yq '.spec.network' connections: encryption: enabled: false", "oc get pods -n openshift-storage | grep rook-ceph rook-ceph-crashcollector-ip-10-0-2-111.ec2.internal-796ffcm9kn9 1/1 Running 0 5m11s rook-ceph-crashcollector-ip-10-0-27-61.ec2.internal-854b4d8sk5z 1/1 Running 0 5m9s rook-ceph-crashcollector-ip-10-0-33-53.ec2.internal-589d9f4f8vx 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-2-111.ec2.internal-6d48cdc5fd-2tmsl 1/1 Running 0 5m9s rook-ceph-exporter-ip-10-0-27-61.ec2.internal-546c66c7cc-9lnpz 1/1 Running 0 5m7s rook-ceph-exporter-ip-10-0-33-53.ec2.internal-b5555994c-x8mzz 1/1 Running 0 5m5s rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7bd754f6vwps2 2/2 Running 0 4m56s rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-6cc5cc647c78m 2/2 Running 0 4m30s rook-ceph-mgr-a-6f8467578d-f8279 3/3 Running 0 3m40s rook-ceph-mgr-b-66754d99cf-9q58g 3/3 Running 0 3m27s rook-ceph-mon-a-75bc5dd655-tvdqf 2/2 Running 0 4m7s rook-ceph-mon-b-6b6d4d9b4c-tjbpz 2/2 Running 0 4m55s rook-ceph-mon-c-7456bb5f67-rtwpj 2/2 Running 0 4m32s rook-ceph-operator-7b5b9cdb9b-tvmb6 1/1 Running 0 45m rook-ceph-osd-0-b78dd99f6-n4wbm 2/2 Running 0 3m3s rook-ceph-osd-1-5887bf6d8d-2sncc 2/2 Running 0 2m39s rook-ceph-osd-2-784b59c4c8-44phh 2/2 Running 0 2m14s rook-ceph-osd-prepare-a075cf185c9b2e5d92ec3f7769565e38-ztrms 0/1 Completed 0 42m rook-ceph-osd-prepare-b4b48dc5e3bef99ab377e2a255a9142a-mvgnd 0/1 Completed 0 42m rook-ceph-osd-prepare-fae2ea2ad4aacbf62010ae5b60b87f57-6t9l5 0/1 Completed 0 42m", "oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 27m Ready 2024-11-06T16:15:26Z 4.18.0", "oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": true}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster patched", "get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 9h Ready true 2024-11-06T20:48:03Z 4.18.0", "oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: true", "root@ceph-client ~]# ceph config set global ms_client_mode secure ceph config set global ms_cluster_mode secure ceph config set global ms_service_mode secure ceph config set global rbd_default_map_options ms_mode=secure", "ceph config dump | grep ms_ ceph config dump | grep ms_ global basic ms_client_mode secure * global basic ms_cluster_mode secure * global basic ms_service_mode secure * 
global advanced rbd_default_map_options ms_mode=secure *", "ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'", "ceph config rm global ms_client_mode ceph config rm global ms_cluster_mode ceph config rm global ms_service_mode ceph config rm global rbd_default_map_options ceph config dump | grep ms_", "ceph orch ls --format plain | tail -n +2 | awk '{print USD1}' | xargs -I {} ceph orch restart {} Scheduled to restart alertmanager.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-0 on host 'osd-0' Scheduled to restart ceph-exporter.osd-2 on host 'osd-2' Scheduled to restart ceph-exporter.osd-3 on host 'osd-3' Scheduled to restart ceph-exporter.osd-1 on host 'osd-1' Scheduled to restart crash.osd-0 on host 'osd-0' Scheduled to restart crash.osd-2 on host 'osd-2' Scheduled to restart crash.osd-3 on host 'osd-3' Scheduled to restart crash.osd-1 on host 'osd-1' Scheduled to restart grafana.osd-0 on host 'osd-0' Scheduled to restart mds.fsvol001.osd-0.lpciqk on host 'osd-0' Scheduled to restart mds.fsvol001.osd-2.wocnxz on host 'osd-2' Scheduled to restart mgr.osd-0.dtkyni on host 'osd-0' Scheduled to restart mgr.osd-2.kqcxwu on host 'osd-2' Scheduled to restart mon.osd-2 on host 'osd-2' Scheduled to restart mon.osd-3 on host 'osd-3' Scheduled to restart mon.osd-1 on host 'osd-1' Scheduled to restart node-exporter.osd-0 on host 'osd-0' Scheduled to restart node-exporter.osd-2 on host 'osd-2' Scheduled to restart node-exporter.osd-3 on host 'osd-3' Scheduled to restart node-exporter.osd-1 on host 'osd-1' Scheduled to restart osd.1 on host 'osd-0' Scheduled to restart osd.4 on host 'osd-0' Scheduled to restart osd.0 on host 'osd-2' Scheduled to restart osd.5 on host 'osd-2' Scheduled to restart osd.2 on host 'osd-3' Scheduled to restart osd.6 on host 'osd-3' Scheduled to restart osd.3 on host 'osd-1' Scheduled to restart osd.7 on host 'osd-1' Scheduled to restart 
prometheus.osd-0 on host 'osd-0' Scheduled to restart rgw.rgw.ssl.osd-1.smzpfj on host 'osd-1'", "ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID alertmanager.osd-0 osd-0 *:9093,9094 running (116s) 9s ago 10h 19.5M - 0.26.0 7dbf12091920 4694a72d4bbd ceph-exporter.osd-0 osd-0 running (19s) 9s ago 10h 7310k - 18.2.1-229.el9cp 3fd804e38f5b 49bdc7d99471 ceph-exporter.osd-1 osd-1 running (97s) 26s ago 10h 7285k - 18.2.1-229.el9cp 3fd804e38f5b 7000d59d23b4 ceph-exporter.osd-2 osd-2 running (76s) 26s ago 10h 7306k - 18.2.1-229.el9cp 3fd804e38f5b 3907515cc352 ceph-exporter.osd-3 osd-3 running (49s) 26s ago 10h 6971k - 18.2.1-229.el9cp 3fd804e38f5b 3f3952490780 crash.osd-0 osd-0 running (17s) 9s ago 10h 6878k - 18.2.1-229.el9cp 3fd804e38f5b 38e041fb86e3 crash.osd-1 osd-1 running (96s) 26s ago 10h 6895k - 18.2.1-229.el9cp 3fd804e38f5b 21ce3ef7d896 crash.osd-2 osd-2 running (74s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 210ca9c8d928 crash.osd-3 osd-3 running (47s) 26s ago 10h 6899k - 18.2.1-229.el9cp 3fd804e38f5b 710d42d9d138 grafana.osd-0 osd-0 *:3000 running (114s) 9s ago 10h 72.9M - 10.4.0-pre f142b583a1b1 3dc5e2248e95 mds.fsvol001.osd-0.qjntcu osd-0 running (99s) 9s ago 10h 17.5M - 18.2.1-229.el9cp 3fd804e38f5b 50efa881c04b mds.fsvol001.osd-2.qneujv osd-2 running (51s) 26s ago 10h 15.3M - 18.2.1-229.el9cp 3fd804e38f5b a306f2d2d676 mgr.osd-0.zukgyq osd-0 *:9283,8765,8443 running (21s) 9s ago 10h 442M - 18.2.1-229.el9cp 3fd804e38f5b 8ef9b728675e mgr.osd-1.jqfyal osd-1 *:8443,9283,8765 running (92s) 26s ago 10h 480M - 18.2.1-229.el9cp 3fd804e38f5b 1ab52db89bfd mon.osd-1 osd-1 running (90s) 26s ago 10h 41.7M 2048M 18.2.1-229.el9cp 3fd804e38f5b 88d1fe1e10ac mon.osd-2 osd-2 running (72s) 26s ago 10h 31.1M 2048M 18.2.1-229.el9cp 3fd804e38f5b 02f57d3bb44f mon.osd-3 osd-3 running (45s) 26s ago 10h 24.0M 2048M 18.2.1-229.el9cp 3fd804e38f5b 5e3783f2b4fa node-exporter.osd-0 osd-0 *:9100 running (15s) 9s ago 10h 7843k - 1.7.0 8c904aa522d0 2dae2127349b node-exporter.osd-1 osd-1 *:9100 running (94s) 26s ago 10h 11.2M - 1.7.0 8c904aa522d0 010c3fcd55cd node-exporter.osd-2 osd-2 *:9100 running (69s) 26s ago 10h 17.2M - 1.7.0 8c904aa522d0 436f2d513f31 node-exporter.osd-3 osd-3 *:9100 running (41s) 26s ago 10h 12.4M - 1.7.0 8c904aa522d0 5579f0d494b8 osd.0 osd-0 running (109s) 9s ago 10h 126M 4096M 18.2.1-229.el9cp 3fd804e38f5b 997076cd39d4 osd.1 osd-1 running (85s) 26s ago 10h 139M 4096M 18.2.1-229.el9cp 3fd804e38f5b 08b720f0587d osd.2 osd-2 running (65s) 26s ago 10h 143M 4096M 18.2.1-229.el9cp 3fd804e38f5b 104ad4227163 osd.3 osd-3 running (36s) 26s ago 10h 94.5M 1435M 18.2.1-229.el9cp 3fd804e38f5b db8b265d9f43 osd.4 osd-0 running (104s) 9s ago 10h 164M 4096M 18.2.1-229.el9cp 3fd804e38f5b 50dcbbf7e012 osd.5 osd-1 running (80s) 26s ago 10h 131M 4096M 18.2.1-229.el9cp 3fd804e38f5b 63b21fe970b5 osd.6 osd-3 running (32s) 26s ago 10h 243M 1435M 18.2.1-229.el9cp 3fd804e38f5b 26c7ba208489 osd.7 osd-2 running (61s) 26s ago 10h 130M 4096M 18.2.1-229.el9cp 3fd804e38f5b 871a2b75e64f prometheus.osd-0 osd-0 *:9095 running (12s) 9s ago 10h 44.6M - 2.48.0 58069186198d e49a064d2478 rgw.rgw.ssl.osd-1.bsmbgd osd-1 *:80 running (78s) 26s ago 10h 75.4M - 18.2.1-229.el9cp 3fd804e38f5b d03c9f7ae4a4", "oc patch storagecluster ocs-external-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/network\", \"value\": {\"connections\": {\"encryption\": {\"enabled\": false}}} }]' storagecluster.ocs.openshift.io/ocs-external-storagecluster 
patched", "oc get storagecluster NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 12h Ready true 2024-11-06T20:48:03Z 4.18.0", "oc get storagecluster ocs-external-storagecluster -o yaml | yq '.spec.network.connections' encryption: enabled: false", "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'", "oc describe noobaa", "oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d", "oc describe pod <image-registry-name>", "oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 
40Gi>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]", "oc get clusterresourcequota -A oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'", "oc get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py", "oc get cm rook-ceph-external-cluster-script-config -n openshift-storage -o jsonpath='{.data.script}' | base64 --decode > ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix", "caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x 
pool=default.rgw.buckets.index", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]", "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "oc get pvc data-pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h", "oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"", "persistentvolumeclaim/data-pvc annotated", "oc get reclaimspacecronjobs.csiaddons.openshift.io", "NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 
@monthly 3s", "oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true", "persistentvolumeclaim/data-pvc annotated", "oc get reclaimspacecronjobs.csiaddons.openshift.io", "NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s", "oc get reclaimspacecronjobs -o jsonpath='{range .items[?(@.spec.jobTemplate.spec.target.persistentVolumeClaim==\"<PVC_NAME>\")]}{.metadata.name}{\"\\n\"}{end}'", "oc annotate reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> \"csiaddons.openshift.io/state=unmanaged\" --overwrite=true", "oc patch reclaimspacecronjobs <RECLAIMSPACECRONJOB_NAME> -p '{\"spec\": {\"suspend\": true}}' --type=merge", "apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: persistentVolumeClaim: pvc-1 timeout: 360", "apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid", "Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z", "apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"", "delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\"", "odf subvolume ls --stale", "Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale", "odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>", "odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi", "Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted", "oc edit configmap rook-ceph-operator-config -n openshift-storage", "oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml", "apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] 
kind: ConfigMap metadata: [...]", "oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>", "oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size\", \"value\": 2 }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched", "oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq [ { \"application\": \"\", \"deviceClass\": \"ssd\", \"erasureCoded\": { \"codingChunks\": 0, \"dataChunks\": 0 }, \"failureDomain\": \"zone\", \"mirroring\": {}, \"quotas\": {}, \"replicated\": { \"replicasPerFailureDomain\": 1, \"size\": 2, \"targetSizeRatio\": 0.49 }, \"statusCheck\": { \"mirror\": {} } } ]", "ceph osd pool ls | grep filesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0", "oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'", "-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs", "-n openshift-storage get pod | grep csi-nfsplugin", "csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export", "oc get pods -n openshift-storage | grep rook-ceph-nfs", "oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs", "oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs", "apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a", "oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d", "get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d", "oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215", "oc -n 
openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]", "mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path", "odf get recovery-profile", "odf set recovery-profile <option>", "odf get recovery-profile", "odf set full 0.9", "odf set full 0.92", "odf set full 0.85", "odf set backfillfull 0.85", "odf set nearfull 0.8", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/managing_and_allocating_storage_resources/overriding-vault-connection-details-using-tenant-configmap_rhodf
Appendix H. Other Technical Documentation
Appendix H. Other Technical Documentation To learn more about anaconda , the Red Hat Enterprise Linux installation program, visit the project Web page: https://fedoraproject.org/wiki/Anaconda . Both anaconda and Red Hat Enterprise Linux systems use a common set of software components. For detailed information on key technologies, refer to the Web sites listed below: Boot Loader Red Hat Enterprise Linux uses the GRUB boot loader. Refer to http://www.gnu.org/software/grub/ for more information. Disk Partitioning Red Hat Enterprise Linux uses parted to partition disks. Refer to http://www.gnu.org/software/parted/ for more information. Storage Management Logical Volume Management (LVM) provides administrators with a range of facilities to manage storage. By default, the Red Hat Enterprise Linux installation process formats drives as LVM volumes. Refer to http://www.tldp.org/HOWTO/LVM-HOWTO/ for more information. Audio Support The Linux kernel used by Red Hat Enterprise Linux incorporates PulseAudio audio server. For more information about PulseAudio, refer to the project documentation: http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/ . Graphics System Both the installation system and Red Hat Enterprise Linux use the Xorg suite to provide graphical capabilities. Components of Xorg manage the display, keyboard and mouse for the desktop environments that users interact with. Refer to http://www.x.org/ for more information. Remote Displays Red Hat Enterprise Linux and anaconda include VNC (Virtual Network Computing) software to enable remote access to graphical displays. For more information about VNC, refer to the documentation on the RealVNC Web site: http://www.realvnc.com/support/documentation.html . Command-line Interface By default, Red Hat Enterprise Linux uses the GNU bash shell to provide a command-line interface. The GNU Core Utilities complete the command-line environment. Refer to http://www.gnu.org/software/bash/bash.html for more information on bash . To learn more about the GNU Core Utilities, refer to http://www.gnu.org/software/coreutils/ . Remote System Access Red Hat Enterprise Linux incorporates the OpenSSH suite to provide remote access to the system. The SSH service enables a number of functions, which include access to the command-line from other systems, remote command execution, and network file transfers. During the installation process anaconda may use the scp feature of OpenSSH to transfer crash reports to remote systems. Refer to the OpenSSH Web site for more information: http://www.openssh.com/ . Access Control SELinux provides Mandatory Access Control (MAC) capabilities that supplement the standard Linux security features. Refer to the SELinux Project Pages for more information: http://www.nsa.gov/research/selinux/index.shtml . Firewall The Linux kernel used by Red Hat Enterprise Linux incorporates the netfilter framework to provide firewall features. The Netfilter project website provides documentation for both netfilter , and the iptables administration facilities: http://netfilter.org/documentation/index.html . Software Installation Red Hat Enterprise Linux uses yum to manage the RPM packages that make up the system. Refer to http://yum.baseurl.org/ for more information. Virtualization Virtualization provides the capability to simultaneously run multiple operating systems on the same computer. Red Hat Enterprise Linux also includes tools to install and manage the secondary systems on a Red Hat Enterprise Linux host. 
You may select virtualization support during the installation process, or at any time thereafter. Refer to the Red Hat Enterprise Linux Virtualization documentation available from https://access.redhat.com/documentation/en/red-hat-enterprise-linux/ for more information.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-techref
Chapter 3. OpenShift Virtualization release notes
Chapter 3. OpenShift Virtualization release notes 3.1. About Red Hat OpenShift Virtualization Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects. OpenShift Virtualization is represented by the icon. You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider. Learn more about what you can do with OpenShift Virtualization . 3.1.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.9 is supported for use on OpenShift Container Platform 4.9 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 3.1.2. Supported guest operating systems OpenShift Virtualization guests can use the following operating systems: Red Hat Enterprise Linux 6, 7, and 8. Red Hat Enterprise Linux 9 Alpha (Technology Preview). Microsoft Windows Server 2012 R2, 2016, and 2019. Microsoft Windows 10. Other operating system templates shipped with OpenShift Virtualization are not supported. 3.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 3.3. New and changed features OpenShift Virtualization is certified in Microsoft's Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads. The SVVP Certification applies to: Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS . Intel and AMD CPUs. High-performance virtual machine templates are now available for supported Windows operating systems . If your OpenShift Virtualization Operator subscription used any update channel other than stable , it is now automatically subscribed to the stable channel. This single update channel delivers z-stream and minor version updates and ensures that your OpenShift Virtualization and OpenShift Container Platform versions are compatible. You can now use the virtctl guestfs command to maintain, repair, and debug virtual machine disks . You can now boot virtual machines with EFI mode without mandatory Secure Boot. 3.3.1. Quick starts Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts . You can filter the available tours by entering the virtualization keyword in the Filter field. 3.3.2. Installation You can now deploy OpenShift Virtualization on FIPS-enabled clusters . You can now download the virtctl client even if the cluster is offline by using the ConsoleCLIDownload custom resource (CR). 3.3.3. Networking You can now enable or disable MAC spoof filtering on secondary networks by configuring a Linux bridge network attachment definition in the CLI. 3.3.4. Storage You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . 
Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. In addition to previously available cloning methods such as snapshots and host-assisted cloning, you can now specify csi-clone as the default cloning behavior, which uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. You can now take a snapshot of an online virtual machine . If the QEMU guest agent is installed, the file system is quiesced when taking the snapshot, maximizing data integrity. 3.3.5. Web console You can now automate your Windows virtual machine setup by uploading answer files in XML format in the Advanced SysPrep section of the Create virtual machine from template wizard. You can use the OpenShift Virtualization dashboard in the web console to get data on resource consumption for virtual machines and associated pods. The dashboard provides visual representations of cluster metrics so you can quickly understand the state of your cluster. 3.4. Removed features Removed features are not supported in the current release. Importing a single virtual machine from Red Hat Virtualization (RHV) or VMware is removed from OpenShift Virtualization 4.9. This feature is replaced by the Migration Toolkit for Virtualization . 3.5. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope You can now enable automatic updates for OpenShift Virtualization workloads, such as virt-launcher pods. Configure workload update strategies by editing the HyperConverged custom resource. You can now hot-plug and hot-unplug virtual disks when you want to add or remove them from your virtual machine without stopping the virtual machine instance. You can now use the Red Hat Enterprise Linux 9 Alpha template to create virtual machines. You can now deploy OpenShift Virtualization on AWS bare metal nodes . 3.6. Bug fixes The Template provider menu in the web console no longer offers "Red Hat Supported" as a template search filter, to avoid confusion with the "Red Hat Provided" filter. ( BZ#1952737 ) The KubeVirt plugin now checks the API version available and uses the correct version, rather than defaulting to the v1 API version, which resulted in an API mismatch and prevented virtual machine creation. ( BZ#1977037 ), ( BZ#1979114 ) The Red Hat Enterprise Linux (RHEL) 6 template is no longer prioritized in the web console. ( BZ#1978200 ) The Red Hat Enterprise Linux (RHEL) 6 template is no longer labeled as a community-provided template in the web console. ( BZ#1978202 ) The web console can now retrieve more information from virtual machines, including time zone and number of active users. ( BZ#1979190 ) Live migration between nodes with incompatible CPUs is now prevented on clusters containing nodes that are not configured identically. ( BZ#1760028 ) If you initially deployed OpenShift Virtualization version 2.4.z or earlier, you can now upgrade to the latest version without using a workaround. ( BZ#1986989 ) If you run OpenShift Virtualization 2.6.5 with OpenShift Container Platform 4.8 or later, you can now create a virtual machine from the Customize wizard. 
( BZ#1979116 ) RHV VM import no longer fails if the VM affinity policy is set to Migratable rather than Pinned . ( BZ#1977277 ) Selecting Create With Import wizard on the Virtualization page of the OpenShift Virtualization web console no longer results in an erroneous error message. ( BZ#1974812 ) 3.7. Known issues If you use OpenShift Virtualization on OpenShift Container Platform 4.9.4 or earlier with the Border Gateway Protocol daemon running and then you modify the network interface with BGP route entries, the BGP routes will be converted into static routes. nmstate-1.0.2-14.el8_4.noarch , which ships with OpenShift Container Platform 4.9.4, does not handle the Bird Internet Routing Daemon protocol correctly. You can prevent this issue by upgrading your cluster to OpenShift Container Platform 4.9.5 or later. If BGP routes have already been converted to static routes, you must remove the static routes from the network interface and add the routes manually. Updating to OpenShift Virtualization 4.9.6 causes some virtual machines (VMs) to get stuck in a live migration loop. This occurs if the spec.volumes.containerDisk.path field in the VM manifest is set to a relative path. As a workaround, delete and recreate the VM manifest, setting the value of the spec.volumes.containerDisk.path field to an absolute path. You can then update OpenShift Virtualization. If you hot-plug a virtual disk and then force delete the virt-launcher pod, you might lose data. This is due to a race condition that can cause the VM disk's contents to be wiped from the persistent volume. ( BZ#2007397 ) Editing a virtual machine fails if the VM references a deleted template that was provided by OpenShift Virtualization before version 4.8. In OpenShift Virtualization 4.8 and later, deleted OpenShift Virtualization-provided templates are automatically recreated by the OpenShift Virtualization Operator. If a cloning operation is initiated before the source is available to be cloned, the operation stalls indefinitely. This is because the clone authorization expires before the cloning operation starts. ( BZ#1855182 ) As a workaround, delete the DataVolume object that is requesting the clone. When the source is available, recreate the DataVolume object that you deleted so that the cloning operation can complete successfully. If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding to the default interface of a host because of a change in the host network topology of OVN-Kubernetes. ( BZ#1885605 ) As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. Running virtual machines that cannot be live migrated might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces. As a workaround, you can reconfigure the virtual machines so that they can be powered off during a cluster upgrade. In the spec section of the virtual machine configuration file: Remove the evictionStrategy: LiveMigrate field. See Configuring virtual machine eviction strategy for more information on how to configure eviction strategy. Set the runStrategy field to Always . As a workaround, set the default CPU model by running the following command: Note You must make this change before starting the virtual machines that support live migration.
$ oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { "op": "add", "path": "/spec/configuration/cpuModel", "value": "<cpu_model>" 1 } ]' 1 Replace <cpu_model> with the actual CPU model value. You can determine this value by running oc describe node <node> for all nodes and looking at the cpu-model-<name> labels. Select the CPU model that is present on all of your nodes. If you enter the wrong credentials for the RHV Manager while importing an RHV VM, the Manager might lock the admin user account because the vm-import-operator tries repeatedly to connect to the RHV API. ( BZ#1887140 ) To unlock the account, log in to the Manager and enter the following command: $ ovirt-aaa-jdbc-tool user unlock admin If you run OpenShift Virtualization 2.6.5 with OpenShift Container Platform 4.8 or later, various issues occur. You can avoid these issues by upgrading OpenShift Virtualization to version 4.8 or later. In the web console, if you navigate to the Virtualization page and select Create With YAML the following error message is displayed: The server doesn't have a resource type "kind: VirtualMachine, apiVersion: kubevirt.io/v1" As a workaround, edit the VirtualMachine manifest so the apiVersion is kubevirt.io/v1alpha3 . For example: apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: annotations: ... ( BZ#1979114 ) When connecting to the VNC console by using the OpenShift Virtualization web console, the VNC console always fails to respond. As a workaround, create the virtual machine from the CLI or upgrade to OpenShift Virtualization 4.8. ( BZ#1977037 )
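For illustration, the virtctl guestfs command mentioned in the new-features list operates on the persistent volume claim (PVC) that backs a virtual machine disk. The following is a minimal sketch, not taken from the release notes; the PVC name my-vm-disk and the namespace my-vms are hypothetical:
$ virtctl guestfs my-vm-disk --namespace my-vms
If the PVC exists and is not in use by a running VM, this starts an interactive libguestfs-tools session in a pod with the disk attached, from which you can inspect and repair the disk.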
[ "oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[ { \"op\": \"add\", \"path\": \"/spec/configuration/cpuModel\", \"value\": \"<cpu_model>\" 1 } ]'", "ovirt-aaa-jdbc-tool user unlock admin", "The server doesn't have a resource type \"kind: VirtualMachine, apiVersion: kubevirt.io/v1\"", "apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: annotations:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/virtualization/virt-4-9-release-notes
Chapter 4. Upgrading a geo-replication deployment of standalone Red Hat Quay
Chapter 4. Upgrading a geo-replication deployment of standalone Red Hat Quay Use the following procedure to upgrade your geo-replication Red Hat Quay deployment. Important When upgrading a geo-replication Red Hat Quay deployment to the next y-stream release (for example, Red Hat Quay 3.7 → Red Hat Quay 3.8), you must stop operations before upgrading. There is intermittent downtime when upgrading from one y-stream release to the next. It is highly recommended to back up your Red Hat Quay deployment before upgrading. Prerequisites You have logged into registry.redhat.io Procedure This procedure assumes that you are running Red Hat Quay services on three (or more) systems. For more information, see Preparing for Red Hat Quay high availability . Obtain a list of all Red Hat Quay instances on each system running a Red Hat Quay instance. Enter the following command on System A to reveal the Red Hat Quay instances: $ sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01 Enter the following command on System B to reveal the Red Hat Quay instances: $ sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02 Enter the following command on System C to reveal the Red Hat Quay instances: $ sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03 Temporarily shut down all Red Hat Quay instances on each system. Enter the following command on System A to shut down the Red Hat Quay instance: $ sudo podman stop ec16ece208c0 Enter the following command on System B to shut down the Red Hat Quay instance: $ sudo podman stop 7ae0c9a8b37d Enter the following command on System C to shut down the Red Hat Quay instance: $ sudo podman stop e75c4aebfee9 Obtain the latest Red Hat Quay version, for example, Red Hat Quay 3.12, on each system. Enter the following command on System A to obtain the latest Red Hat Quay version: $ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 Enter the following command on System B to obtain the latest Red Hat Quay version: $ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 Enter the following command on System C to obtain the latest Red Hat Quay version: $ sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0 On System A of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.12: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay01 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 Wait for the new Red Hat Quay container to become fully operational on System A.
You can check the status of the container by entering the following command: $ sudo podman ps Example output CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v3.8.0 registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01 Optional: Ensure that Red Hat Quay is fully operational by navigating to the Red Hat Quay UI. After ensuring that Red Hat Quay on System A is fully operational, run the new image versions on System B and on System C. On System B of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.12: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay02 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 On System C of your highly available Red Hat Quay deployment, run the new image version, for example, Red Hat Quay 3.12: # sudo podman run --restart=always -p 443:8443 -p 80:8080 \ --sysctl net.core.somaxconn=4096 \ --name=quay03 \ -v /mnt/quay/config:/conf/stack:Z \ -v /mnt/quay/storage:/datastorage:Z \ -d registry.redhat.io/quay/quay-rhel8:v3.8.0 You can check the status of the containers on System B and on System C by entering the following command: $ sudo podman ps
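As an additional check after each instance is upgraded, you can probe the registry health endpoint from any machine that can reach it. This is a sketch rather than part of the documented procedure; the hostname quay01.example.com is hypothetical, and -k is only needed with self-signed certificates:
$ curl -k https://quay01.example.com/health/instance
A JSON response with a status code of 200 indicates that the instance considers itself healthy.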
[ "sudo podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ec16ece208c0 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 6 minutes ago Up 6 minutes ago 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp quay01", "sudo podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ae0c9a8b37d registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 5 minutes ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay02", "sudo podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e75c4aebfee9 registry.redhat.io/quay/quay-rhel8:v3.7.0 registry 4 seconds ago Up 4 seconds ago 0.0.0.0:84->8080/tcp, 0.0.0.0:447->8443/tcp quay03", "sudo podman stop ec16ece208c0", "sudo podman stop 7ae0c9a8b37d", "sudo podman stop e75c4aebfee9", "sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman pull registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay01 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman ps", "CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 70b9f38c3fb4 registry.redhat.io/quay/quay-rhel8:v3.8.0 registry 2 seconds ago Up 2 seconds ago 0.0.0.0:82->8080/tcp, 0.0.0.0:445->8443/tcp quay01", "sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay02 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman run --restart=always -p 443:8443 -p 80:8080 --sysctl net.core.somaxconn=4096 --name=quay03 -v /mnt/quay/config:/conf/stack:Z -v /mnt/quay/storage:/datastorage:Z -d registry.redhat.io/quay/quay-rhel8:v3.8.0", "sudo podman ps" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/upgrade_red_hat_quay/upgrading-geo-repl-quay
Release notes for the Red Hat build of Cryostat 2.0
Release notes for the Red Hat build of Cryostat 2.0 Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.0/index
3.12. Software Collection Lock File Support
3.12. Software Collection Lock File Support By default, programs packaged into a Software Collection create lock files in the /opt/ provider /%{scl}/root/var/lock/ directory. To make lock files more accessible and easier to manage, you are advised to use the nfsmountable macro that redefines the _localstatedir macro. This results in lock files being created underneath the /var/opt/ provider /%{scl}/lock/ directory, outside of the /opt/ provider /%{scl} file system hierarchy. If applications or services packaged into your Software Collection write their lock files underneath the /var/opt/ provider /%{scl}/lock/ directory, then those applications and services can run concurrently with the system versions, provided that the resources of your Software Collection's applications and services do not conflict with the system versions' resources. For example, a lock file mylockfile.lock is normally created in the /var/lock/ directory in the base system installation. If the lock file is a part of a software_collection Software Collection and the nfsmountable macro is defined, the path to the lock file in software_collection is as follows: For more information on using the nfsmountable macro, see Section 3.1, "Using Software Collections over NFS" . Preventing Programs from Running Concurrently If you want to prevent your Software Collection's applications or services from running while the system version of the respective application or service is running, make sure that your applications or services, which require a lock, write the lock file to the system directory /var/lock/ . In this way, your application's or service's lock file will not be overwritten; the lock file is not renamed, and its name stays the same as the system version's. 3.12.1. Software Collection SysV init Lock File Support When a service is started by an init script, a lock file is touched in the /var/lock/subsys/ directory with the same name as the init script. As discussed in Section 3.4, "Managing Services in Software Collections" , service names include a Software Collection prefix. Use the same naming convention for files underneath /var/lock/subsys/ to ensure that the lock file names do not conflict with the base system installation.
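As a quick check, you can verify where a relocated Software Collection actually writes its lock files. This sketch assumes a collection named mycollection from a provider named myorg, both hypothetical, built with the nfsmountable macro defined:
$ ls /var/opt/myorg/mycollection/lock/
mylockfile.lock
If the lock file instead appears underneath /opt/myorg/mycollection/root/var/lock/ , the nfsmountable macro was not in effect when the collection was built.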
[ "/var/opt/ provider / software_collection /lock/ mylockfile.lock" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_lock_file_support
function::randint
function::randint Name function::randint - Return a random number between [0,n) Synopsis Arguments n Number past upper limit of range, not larger than 2**20.
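A minimal usage sketch, assuming SystemTap is installed; run from a shell, it prints one value in the range [0,10) and exits:
$ stap -e 'probe begin { printf("%d\n", randint(10)); exit() }'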
[ "randint:long(n:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-randint
Using SELinux
Using SELinux Red Hat Enterprise Linux 9 Prevent users and processes from performing unauthorized interactions with files and devices by using Security-Enhanced Linux (SELinux) Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/index
Chapter 7. Bucket policies in the Multicloud Object Gateway
Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. About bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. See the following example: There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . Instructions for creating S3 users can be found in Section 7.3, "Creating an AWS S3 user in the Multicloud Object Gateway" . Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. 7.3. Creating an AWS S3 user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card, click Storage System and click the storage system link from the pop up that appears. In the Object tab, click the Multicloud Object Gateway link. Under the Accounts tab, click Create Account . Select S3 Access Only , provide the Account Name , for example, [email protected] . Click . Select S3 default placement , for example, noobaa-default-backing-store . Select Buckets Permissions . A specific bucket or all buckets can be selected. Click Create .
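To confirm that a policy has been applied, the same AWS S3 client can read it back. This verification step is a sketch that reuses the ENDPOINT and MyBucket placeholders from the procedure above:
$ aws --endpoint ENDPOINT --no-verify-ssl s3api get-bucket-policy --bucket MyBucket
The command returns the policy document as JSON if one is set, or an error if the bucket has no policy.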
[ "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway
Chapter 6. Content distribution with Red Hat Quay
Chapter 6. Content distribution with Red Hat Quay Content distribution features in Red Hat Quay include: Repository mirroring Geo-replication Deployment in air-gapped environments 6.1. Repository mirroring Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following: Choose a repository from an external registry to mirror Add credentials to access the external registry Identify specific container image repository names and tags to sync Set intervals at which a repository is synced Check the current state of synchronization To use the mirroring functionality, you need to perform the following actions: Enable repository mirroring in the Red Hat Quay configuration file Run a repository mirroring worker Create mirrored repositories All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API. 6.1.1. Using repository mirroring The following list shows features and limitations of Red Hat Quay repository mirroring: With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through Unix shell-style wildcards. For more information, see the documentation for wildcards . When a repository is set as mirrored, you cannot manually add other images to that repository. Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repository and tag pair. For example, if you change the tag so that some images in the repository no longer match, those images will be deleted. Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository. Mirroring can be configured to roll back on failure, or to run on a best-effort basis. With a mirrored repository, a user with read permissions can pull images from the repository but cannot push images to the repository. Changing settings on your mirrored repository can be performed in the Red Hat Quay user interface, using the Repositories Mirrors tab for the mirrored repository you create. Images are synced at set intervals, but can also be synced on demand. 6.1.2. Repository mirroring recommendations Best practices for repository mirroring include the following: Repository mirroring pods can run on any node. This means that you can run mirroring on nodes where Red Hat Quay is already running. Repository mirroring is scheduled in the database and runs in batches. As a result, repository workers check each repository mirror configuration file and read when the next synchronization is due. More mirror workers means that more repositories can be mirrored at the same time. For example, running 10 mirror workers means that a user can run 10 mirroring operations in parallel. If a user only has 2 workers with 10 mirror configurations, only 2 operations can run in parallel.
The optimal number of mirroring pods depends on the following conditions: The total number of repositories to be mirrored The number of images and tags in the repositories and the frequency of changes Parallel batching For example, if a user is mirroring a repository that has 100 tags, the mirror will be completed by one worker. Users must consider how many repositories one wants to mirror in parallel, and base the number of workers around that. Multiple tags in the same repository cannot be mirrored in parallel. 6.1.3. Event notifications for mirroring There are three notification events for repository mirroring: Repository Mirror Started Repository Mirror Success Repository Mirror Unsuccessful The events can be configured inside of the Settings tab for each repository, and all existing notification methods such as email, Slack, Quay UI, and webhooks are supported. 6.1.4. Mirroring API You can use the Red Hat Quay API to configure repository mirroring: Mirroring API More information is available in the Red Hat Quay API Guide 6.2. Geo-replication Note Currently, the geo-replication feature is not supported on IBM Power and IBM Z. Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 6.2.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine, to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 6.2.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. In case of an object storage system failure of one geo-replicating site, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of your distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites.
Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-replication requires SSL/TLS certificates and keys. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 6.2.3. Geo-replication using standalone Red Hat Quay In the following image, Red Hat Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. Note If Clair fails in one cluster, for example, the US cluster, US users would not see vulnerability reports in Red Hat Quay for the second cluster (EU). This is because all Clair instances have the same state. When Clair fails, it is usually because of a problem within the cluster. Geo-replication architecture 6.2.4. Geo-replication using the Red Hat Quay Operator In the example shown above, the Red Hat Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines. Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database.
Instead, an external shared database would be used. Red Hat Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Red Hat Quay 3.x test matrix . Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database. 6.2.5. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. 6.3. Repository mirroring compared to geo-replication Red Hat Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared, for example, one Red Hat Quay registry with two different blob storage endpoints. The primary use cases for geo-replication include the following: Speeding up access to the binary blobs for geographically dispersed setups Guaranteeing that the image content is the same across regions Repository mirroring synchronizes selected repositories, or subsets of repositories, from one registry to another. The registries are distinct, with each registry having a separate database and separate image storage. The primary use cases for mirroring include the following: Independent registry deployments in different data centers or regions, where a certain subset of the overall content is supposed to be shared across the data centers and regions Automatic synchronization or mirroring of selected (allowlisted) upstream repositories from external registries into a local Red Hat Quay deployment Note Repository mirroring and geo-replication can be used simultaneously. Table 6.1. Red Hat Quay Repository mirroring and geo-replication comparison Feature / Capability Geo-replication Repository mirroring What is the feature designed to do? A shared, global registry Distinct, different registries What happens if replication or mirroring has not been completed yet? The remote copy is used (slower) No image is served Is access to all storage backends in both regions required? Yes (all Red Hat Quay nodes) No (distinct storage) Can users push images from both sites to the same repository? Yes No Is all registry content and configuration identical across all regions (shared database)? Yes No Can users select individual namespaces or repositories to be mirrored? No Yes Can users apply filters to synchronization rules? No Yes Are individual / different role-based access control configurations allowed in each region? No Yes 6.4.
Air-gapped or disconnected deployments In the following diagram, the upper deployment in the diagram shows Red Hat Quay and Clair connected to the internet, with an air-gapped OpenShift Container Platform cluster accessing the Red Hat Quay registry through an explicit, allowlisted hole in the firewall. The lower deployment in the diagram shows Red Hat Quay and Clair running inside of the firewall, with image and CVE data transferred to the target system using offline media. The data is exported from a separate Red Hat Quay and Clair deployment that is connected to the internet. The following diagram shows how Red Hat Quay and Clair can be deployed in air-gapped or disconnected environments: Red Hat Quay and Clair in disconnected, or air-gapped, environments
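As a concrete illustration of the health monitoring described in the geo-replication requirements, a global load balancer could probe each site's /health/endtoend endpoint and route clients away from a site that stops returning success. A minimal sketch with hypothetical hostnames:
$ curl -s -o /dev/null -w '%{http_code}\n' https://quay-us.example.com/health/endtoend
$ curl -s -o /dev/null -w '%{http_code}\n' https://quay-eu.example.com/health/endtoend
A response code other than 200 from one site is the signal to redirect traffic to the remaining site, as the requirements above describe.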
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_architecture/content-distrib-intro
Chapter 1. Deploying and configuring OpenStack Key Manager (barbican)
Chapter 1. Deploying and configuring OpenStack Key Manager (barbican) OpenStack Key Manager (barbican) is the secrets manager for Red Hat OpenStack Platform. You can use the barbican API and command line to centrally manage the certificates, keys, and passwords used by OpenStack services. Barbican is not enabled by default in Red Hat OpenStack Platform. You can deploy barbican in an existing OpenStack deployment. Barbican currently supports the following use cases described in this guide: Symmetric encryption keys - used for Block Storage (cinder) volume encryption, ephemeral disk encryption, and Object Storage (swift) encryption, among others. Asymmetric keys and certificates - used for glance image signing and verification, among others. OpenStack Key Manager integrates with the Block Storage (cinder), Networking (neutron), and Compute (nova) components. 1.1. OpenStack Key Manager workflow The following diagram shows the workflow that OpenStack Key Manager uses to manage secrets for your environment. 1.2. OpenStack Key Manager encryption types Secrets such as certificates, API keys, and passwords, can be stored in an encrypted blob in the barbican database or directly in a secure storage system. You can use a simple crypto plugin or PKCS#11 crypto plugin to encrypt secrets. To store the secrets as an encrypted blob in the barbican database, the following options are available: Simple crypto plugin - The simple crypto plugin is enabled by default and uses a single symmetric key to encrypt all secret payloads. This key is stored in plain text in the barbican.conf file, so it is important to prevent unauthorized access to this file. PKCS#11 crypto plugin - The PKCS#11 crypto plugin encrypts secrets with project-specific key encryption keys (pKEK), which are stored in the barbican database. These project-specific pKEKs are encrypted by a main key-encryption-key (MKEK), which is stored in a hardware security module (HSM). All encryption and decryption operations take place in the HSM, rather than in-process memory. The PKCS#11 plugin communicates with the HSM through the PKCS#11 API. Because the encryption is done in secure hardware, and a different pKEK is used per project, this option is more secure than the simple crypto plugin. Red Hat supports the PKCS#11 back end with any of the following HSMs. Device Supported in release High Availability (HA) support ATOS Trustway Proteccio NetHSM 16.0+ 16.1+ Entrust nShield Connect HSM 16.0+ Not supported Thales Luna Network HSM 16.1 (Technology Preview) 16.1 (Technology Preview) Note Regarding high availability (HA) options: The barbican service runs within Apache and is configured by director to use HAProxy for high availability. HA options for the back end layer will depend on the back end being used. For example, for simple crypto, all the barbican instances have the same encryption key in the config file, resulting in a simple HA configuration. 1.2.1. Configuring multiple encryption mechanisms You can configure a single instance of Barbican to use more than one back end. When this is done, you must specify a back end as the global default back end. You can also specify a default back end per project. If no mapping exists for a project, the secrets for that project are stored using the global default back end. For example, you can configure Barbican to use both the Simple crypto and PKCS#11 plugins. If you set Simple crypto as the global default, then all projects use that back end. 
You can then specify which projects use the PKCS#11 back end by setting PKCS#11 as the preferred back end for that project. If you decide to migrate to a new back end, you can keep the original available while enabling the new back end as the global default or as a project-specific back end. As a result, the old secrets remain available through the old back end, and new secrets are stored in the new global default back end. 1.3. Deploying Key Manager To deploy OpenStack Key Manager, first create an environment file for the barbican service and redeploy the overcloud with additional environment files. You then add users to the creator role to create and edit barbican secrets or to create encrypted volumes that store their secret in barbican. Note This procedure configures barbican to use the simple_crypto back end. Additional back ends are available, such as PKCS#11 which requires a different configuration, and different heat template files depending on which HSM is used. Other back ends such as KMIP, Hashicorp Vault and DogTag are not supported. Prerequisites Overcloud is deployed and running Procedure On the undercloud node, create an environment file for barbican. The BarbicanSimpleCryptoGlobalDefault sets this plugin as the global default plugin. You can also add the following options to the environment file: BarbicanPassword - Sets a password for the barbican service account. BarbicanWorkers - Sets the number of workers for barbican::wsgi::apache . Uses '%{::processorcount}' by default. BarbicanDebug - Enables debugging. BarbicanPolicies - Defines policies to configure for barbican. Uses a hash value, for example: { barbican-context_is_admin: { key: context_is_admin, value: 'role:admin' } } . This entry is then added to /etc/barbican/policy.json . Policies are described in detail in a later section. BarbicanSimpleCryptoKek - The Key Encryption Key (KEK) is generated by director, if none is specified. Add the following files to the openstack overcloud deploy command, without removing previously added role, template or environment files from the script: /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml /home/stack/templates/configure-barbican.yaml Re-run the deployment script to apply changes to your deployment: Retrieve the id of the creator role: Note You will not see the creator role unless OpenStack Key Manager (barbican) is installed. Assign a user to the creator role and specify the relevant project. In this example, a user named user1 in the project_a project is added to the creator role: Verification Create a test secret. For example: Retrieve the payload for the secret you just created: 1.4. Viewing Key Manager policies Barbican uses policies to determine which users are allowed to perform actions against the secrets, such as adding or deleting keys. To implement these controls, keystone project roles such as creator you created earlier, are mapped to barbican internal permissions. As a result, users assigned to those project roles receive the corresponding barbican permissions. The default policy is defined in code and typically does not require any amendments. If policy changes have not been made, you can view the default policy using the existing container in your environment. If changes have been made to the default policy, and you would like to see the defaults, use a separate system to pull the openstack-barbican-api container first. 
Prerequisites OpenStack Key Manager is deployed and running Procedure Use your Red Hat credentials to log in to podman: Pull the openstack-barbican-api container: Generate the policy file in the current working directory: Verification Review the barbican-policy.yaml file to check the policies used by barbican. The policy is implemented by four different roles that define how a user interacts with secrets and secret metadata. A user receives these permissions by being assigned to a particular role: admin The admin role can read, create, edit and delete secrets across all projects. creator The creator role can read, create, edit, and delete secrets that are in the project for which the creator is scoped. observer The observer role can only read secrets. audit The audit role can only read metadata. The audit role can not read secrets. For example, the following entries list the admin , observer , and creator keystone roles for each project. On the right, notice that they are assigned the role:admin , role:observer , and role:creator permissions: These roles can also be grouped together by barbican. For example, rules that specify admin_or_creator can apply to members of either rule:admin or rule:creator . Further down in the file, there are secret:put and secret:delete actions. To their right, notice which roles have permissions to execute these actions. In the following example, secret:delete means that only admin and creator role members can delete secret entries. In addition, the rule states that users in the admin or creator role for that project can delete a secret in that project. The project match is defined by the secret_project_match rule, which is also defined in the policy.
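For example, granting a user read-only access to the secrets in a project maps to the observer role described above. The following is a sketch only; the user name user2 is hypothetical, and project_a matches the project used earlier in this chapter:
$ openstack role add --user user2 --project project_a observer
After this assignment, user2 can retrieve secrets in project_a but cannot create, edit, or delete them.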
[ "cat /home/stack/templates/configure-barbican.yaml parameter_defaults: BarbicanSimpleCryptoGlobalDefault: true", "openstack overcloud deploy --timeout 100 --templates /usr/share/openstack-tripleo-heat-templates --stack overcloud --libvirt-type kvm --ntp-server clock.redhat.com -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/templates/config_lvm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/hostnames.yml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/extra_templates.yaml -e /home/stack/container-parameters-with-barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml -e /home/stack/templates/configure-barbican.yaml --log-file overcloud_deployment_38.log", "openstack role show creator +-----------+----------------------------------+ | Field | Value | +-----------+----------------------------------+ | domain_id | None | | id | 4e9c560c6f104608948450fbf316f9d7 | | name | creator | +-----------+----------------------------------+", "openstack role add --user user1 --project project_a 4e9c560c6f104608948450fbf316f9d7", "openstack secret store --name testSecret --payload 'TestPayload' +---------------+------------------------------------------------------------------------------------+ | Field | Value | +---------------+------------------------------------------------------------------------------------+ | Secret href | https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 | | Name | testSecret | | Created | None | | Status | None | | Content types | None | | Algorithm | aes | | Bit length | 256 | | Secret type | opaque | | Mode | cbc | | Expiration | None | +---------------+------------------------------------------------------------------------------------+", "openstack secret get https://192.168.123.163/key-manager/v1/secrets/4cc5ffe0-eea2-449d-9e64-b664d574be53 --payload +---------+-------------+ | Field | Value | +---------+-------------+ | Payload | TestPayload | +---------+-------------+", "login username: ******** password: ********", "pull registry.redhat.io/rhosp-rhel8/openstack-barbican-api:16.2", "run -it registry.redhat.io/rhosp-rhel8/openstack-barbican-api:16.2 oslopolicy-policy-generator --namespace barbican > barbican-policy.yaml", "# #\"admin\": \"role:admin\" # #\"observer\": \"role:observer\" # #\"creator\": \"role:creator\"", "secret:delete\": \"rule:admin_or_creator and rule:secret_project_match\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/manage_secrets_with_openstack_key_manager/assembly-deploying-configuring-key-manager_rhosp
18.3. Installation in Non-Interactive Line Mode
18.3. Installation in Non-Interactive Line Mode If the inst.cmdline option was specified as a boot option in your parameter file (see Section 21.4, "Parameters for Kickstart Installations" ) or the cmdline option was specified in your Kickstart file (see Chapter 27, Kickstart Installations ), Anaconda starts with non-interactive text line mode. In this mode, all necessary information must be provided in the Kickstart file. The installation program will not allow user interaction and it will stop if any required commands are missing.
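As an illustration, a hedged sketch of the two places this mode can be requested; the server URL, Kickstart file name, and the other boot options shown are assumptions, not values from this section:

# In the parameter file, append inst.cmdline to the existing boot options:
ro ramdisk_size=40000 cio_ignore=all,!condev inst.ks=http://server.example.com/ks.cfg inst.cmdline

# Or place the cmdline command at the top of the Kickstart file itself:
cmdline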
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-graphical-installation-line-mode-s390
Release notes for Red Hat build of OpenJDK 21.0.1
Release notes for Red Hat build of OpenJDK 21.0.1 Red Hat build of OpenJDK 21 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/index
B.31.2. RHSA-2011:0291 - Moderate: java-1.5.0-ibm security update
B.31.2. RHSA-2011:0291 - Moderate: java-1.5.0-ibm security update Updated java-1.5.0-ibm packages that fix one security issue are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.5.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. CVE-2010-4476 A denial of service flaw was found in the way certain strings were converted to Double objects. A remote attacker could use this flaw to cause Java based applications to hang, for example, if they parsed Double values in a specially-crafted HTTP request. All users of java-1.5.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.5.0 SR12-FP3 Java release. All running instances of IBM Java must be restarted for this update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0291
Chapter 3. Assigning and managing unique numeric attribute values
Chapter 3. Assigning and managing unique numeric attribute values Some entry attribute values require a unique number, such as uidNumber and gidNumber . Using the Distributed Numeric Assignment (DNA) plug-in, you can configure Directory Server to generate and assign unique numbers from the configured range of numbers automatically to specified attributes. Note The DNA plug-in does not guarantee attribute uniqueness . If you manually assigned a value from the range that the plug-in manages, the plug-in does not check if the value is unique. With the DNA plug-in, you can effectively avoid replication conflicts by setting different ranges for different local DNA plug-in instances on suppliers. For example, supplier A can assign numbers from 1 to 1000, and supplier B can assign numbers from 1001 to 2000. This ensures that each supplier is using a truly unique set of numbers. 3.1. About Dynamic Number Assignments The DNA plug-in assigns a range of available numbers that the instance can issue. Two attributes define the range: the server's next available number (the bottom value of the range) and its maximum value (the upper value of the range). You set the initial bottom value when you configure the plug-in. Later, the plug-in updates this bottom value. By breaking the available numbers into separate ranges on each replica, the servers can continually assign numbers without overlapping with each other. 3.1.1. Filters, searches, and target entries The server performs a sorted search internally to verify if another server has already taken the specified range, requiring the managed attribute to have an equality index with the proper ordering matching rule. The DNA plug-in is always applied to a specific area of the directory tree (the scope ) and specific entry types within that subtree (the filter ). Important The DNA plug-in works only on a single database; it cannot manage number assignments for multiple databases. The DNA plug-in uses the sort control to check whether a value has been manually allocated outside of the DNA plug-in. However, this validation using the sort control works only on a single database. Additional resources Defining a default index that applies to all newly created databases 3.1.2. Ranges and assigning numbers The Directory Server can generate attribute values using several different methods: When adding a user entry to the directory with an object class that requires the unique-number attribute but without the attribute present in the entry. If the entry matches the DNA filter, it activates the DNA plug-in to assign a value to the managed attribute. This option works only when the DNA plug-in is configured to assign unique values to a single attribute. When using a magic number as a template value for the managed attribute. The magic number is a value outside the server's range; it can be a number or even a word. When an entry is added with the magic value and the entry is within the configured scope and filter of the DNA plug-in, the magic value automatically triggers the plug-in to generate a new value. For example, you can add zero ( 0 ) as a magic number using the ldapmodify utility, as shown in the sketch below: The DNA plug-in only generates new, unique values. If an entry is added or modified to use a specific value for an attribute controlled by the DNA plug-in, the plug-in does not overwrite this specific value.
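For instance, a minimal sketch of triggering generation with the magic value; the entry DN is illustrative, and the example assumes the plug-in is configured with 0 as the magic number for uidNumber:

$ ldapmodify -D "cn=Directory Manager" -W -x
dn: uid=user1,ou=People,dc=example,dc=com
changetype: modify
add: uidNumber
uidNumber: 0

When the modify operation completes, the plug-in replaces 0 with the next unique value from its range.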
3.1.3. Multiple attributes in the same range The DNA plug-in can assign unique numbers to a single attribute type or to multiple attribute types from a single range of unique numbers. This offers multiple options for assigning unique numbers to attributes: A single number for a single attribute type from a unique range. The same unique number for two attributes in one entry. Two different attributes assigned two different numbers from the same range of unique numbers. In many cases, it is sufficient to have a unique number assigned per attribute type. For example, when assigning an employeeID to a new employee entry, it is crucial to ensure each employee entry receives a unique employeeID . However, you can assign unique numbers from the same range of numbers to multiple attributes. For example, when assigning uidNumber and gidNumber to a posixAccount entry, the DNA plug-in can assign the same number to both attributes. To achieve this, pass both managed attributes to the modify operation and specify the magic value ( 0 ) using the ldapmodify utility, as shown in the sketch at the end of this section: When the DNA plug-in handles multiple attributes, it can assign a unique value to only one attribute if the object class permits only one. For example, the posixGroup object class allows gidNumber but not uidNumber . If the DNA plug-in manages both uidNumber and gidNumber , it assigns a unique number for gidNumber from the uidNumber and gidNumber attribute range when creating a posixGroup entry. Sharing a pool for all managed attributes ensures consistent assignment of unique numbers, preventing conflicts where uidNumber and gidNumber on different entries end up with the same number from separate ranges. If the DNA plug-in manages multiple attributes, it assigns the same value to all of them in a single modify operation. However, in cases where an entry does not allow each type of attribute defined for the range, or where an entry allows all of the attribute types defined but only a subset of the attributes require the unique value, you must assign different numbers from the same range by performing separate modify operations. For example: Example 3.1. DNA and Unique Bank Account Numbers Example Bank wants to use the same unique number for a customer's primaryAccount and customerID attributes. The Example Bank administrator configured the DNA plug-in to assign unique values for both attributes from the same range. Additionally, the bank wants to assign numbers for secondary accounts from the same range as the customer ID and primary account numbers, but these numbers cannot be the same as the primary account numbers. The Example Bank administrator configures the DNA plug-in to also manage the secondaryAccount attribute, but will only add the secondaryAccount attribute to an entry after the entry is created and the primaryAccount and customerID attributes are assigned. This ensures that primaryAccount and customerID share the same unique number, and any secondaryAccount numbers are entirely unique but still from the same range of numbers.
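As a sketch of the shared-number case described above — the entry DN is illustrative — both managed attributes are passed in one modify operation with the magic value, so the plug-in assigns them the same unique number:

$ ldapmodify -D "cn=Directory Manager" -W -x
dn: uid=user1,ou=People,dc=example,dc=com
changetype: modify
add: uidNumber
uidNumber: 0
-
add: gidNumber
gidNumber: 0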
3.2. Syntax of the DNA plug-in The Distributed Numeric Assignment (DNA) plug-in itself is a container entry with the distinguished name (DN) cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config . Each DNA entry under the DNA plug-in entry defines a new managed range for the DNA plug-in. Therefore, to configure new managed ranges for the DNA plug-in, create entries under the container entry. For example, if you want the plug-in to manage the uidNumber attribute in entries, create the cn= Account UIDs ,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config configuration entry where you define ranges and other plug-in settings. The plug-in syntax varies depending on whether you configure the plug-in for use on a single server or across multiple servers in a replication topology. DNA plug-in syntax for a single server If you use the plug-in on a single server, a basic DNA configuration entry defines the following attributes: dnaType Defines the attribute whose value the plug-in manages. dnaScope Defines the entry (DN) the plug-in uses as the base to search for entries. dnaFilter Defines the search filter the plug-in uses to identify entries to manage. dnaNextValue Defines the next available value that the plug-in assigns when an entry is created. The following is an example of the DNA configuration entry on a single server for a single attribute type: DNA plug-in syntax for servers in replication topology To configure distributed numeric assignments on multiple suppliers, the configuration entry must also contain the following information to share and transfer ranges: dnaMaxValue Defines the maximum number that the server can assign. dnaThreshold Defines the threshold where the range is low enough to trigger a range transfer. If dnaThreshold is not set, the default value is 1 . dnaRangeRequestTimeout Defines a timeout period that a server waits for an answer from another server when requesting a range transfer. If the server does not receive the range within this time period, the range transfer request goes to another server. By default, the value is set to 10 seconds. dnaSharedCfgDN Defines a configuration entry DN which is shared among all supplier servers and stores the range information for each supplier. dnaNextRange Defines the specific number range that a server assigns to the managed attribute. The dnaNextRange value shows the available range for transfer and is managed automatically by the plug-in as ranges are assigned or used by the server. This range has not yet been assigned to another server and is still available for its local Directory Server to use. The following is an example of the DNA configuration entry on a supplier in a replication topology: For the full list of attributes you can use in the DNA configuration entry, see Distributed Numeric Assignment plug-in attributes . With no dnaNextRange attribute value configured, Directory Server automatically assigns ranges using the dnaMaxValue value as the upper limit for the range. You must explicitly set the dnaNextRange attribute if you want Directory Server to assign a separate, specific range to other servers. Each supplier keeps track of its current range in a separate configuration entry which contains information about the range and the connection settings. This entry is a child of the location in dnaSharedCfgDN . Directory Server replicates the configuration entry to all other suppliers, so each supplier can check that configuration to find a server to contact for a new range. For example:
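To see which supplier currently has values to spare, you can read the replicated entries under the shared configuration DN. A hedged sketch, assuming the cn=Account UIDs,ou=Ranges,dc=example,dc=com shared entry from the example above:

$ ldapsearch -D "cn=Directory Manager" -W -x -b "cn=Account UIDs,ou=Ranges,dc=example,dc=com" -s one dnaHostname dnaPortNum dnaRemainingValues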
3.3. Creating a DNA plug-in configuration entry on a supplier using the command line If you want a supplier to assign unique numbers to a managed attribute, create a DNA plug-in configuration entry for each configuration you want to apply. A DNA plug-in configuration entry is a subentry under the cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config plug-in container entry. In a multi-supplier environment, each supplier manages its own range of values. The ranges are replicated between suppliers and each supplier is aware of which supplier manages which range. Later, a supplier uses this information to request a range transfer from another supplier if the first supplier is running out of range values. The following example creates a new DNA plug-in configuration entry on a supplier by using the dsconf utility. Prerequisites You have root permissions. Procedure Create the DNA configuration entry on a supplier: The command creates the DNA plug-in configuration that sets a unique value to the uidNumber attribute instead of the 99999 magic value in all newly created posixAccount entries under ou=People,dc=example,dc=com . The supplier sets values up to 1300 and requests a range transfer from the second supplier when it reaches the value 1200 . If the second supplier is unresponsive for 60 seconds, the first supplier requests the range transfer from the third supplier. NOTE If you create the configuration entry for a server without replication or for a supplier in a one-supplier environment, set only the --type , --filter , --scope , and --next-value options, as in the sketch at the end of this section. For details about the DNA plug-in configuration attributes, see Distributed Numeric Assignment Plug-in Attributes and Syntax of the DNA plug-in sections. Optional: Create the configuration entry that is shared among all supplier servers: Enable the DNA plug-in: Verification View the configuration entry details: Additional resources Multiple attributes in the same range
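For comparison, a minimal single-server sketch that sets only the options named in the note above; the instance and suffix names are illustrative:

$ dsconf -D "cn=Directory Manager" instance_name plugin dna config "Account UIDs" add --type uidNumber --filter "(objectclass=posixAccount)" --scope ou=People,dc=example,dc=com --next-value 1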
3.4. Creating a DNA plug-in configuration entry on a supplier using the web console If you want Directory Server to assign unique numbers to a managed attribute, create a DNA plug-in configuration entry for each configuration you want to apply. Directory Server stores such plug-in configuration entries under the cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config plug-in container entry. In a multi-supplier environment, each supplier manages its own range of values. The ranges are replicated between suppliers and each supplier is aware of which supplier manages which range. Later, a supplier uses this information to request a range transfer from another supplier if the first supplier is running out of range values. Prerequisites You are logged in to the web console. For more details, see Logging in to the Directory Server by using the web console . Procedure Select the Directory Server instance. Open the Plugins menu and select the DNA plug-in from the list. Click the Add Config button to start configuring the new plug-in configuration entry. On the DNA Configuration tab, set the fields. For example, you want the plug-in to set a unique value to the uidNumber attribute instead of the 99999 magic value in all newly created posixAccount entries under ou=People,dc=example,dc=com . In addition, you want the supplier to set values up to 1300 and request a range transfer from the second supplier when the unique value reaches the value 1200 . In this case, set the following fields: Config Name to Account UIDs DNA Managed Attributes to uidNumber Filter to "(objectclass=posixAccount)" Subtree Scope to ou=People,dc=example,dc=com Value to 1 Max Value to 1300 Magic Regeneration Value to 99999 Threshold to 100 Range Request Timeout to 60 NOTE If you create the configuration entry for a server without replication or for a supplier in a one-supplier environment, set only the DNA Managed Attributes , Filter , Subtree Scope , and Value fields. Go to the Shared Config Settings tab and set the Shared Config Entry DN field to, for example, cn=Account UIDs,ou=Ranges,dc=example,dc=com . This shared configuration entry contains information about which server to contact for the range transfer if the current server is out of unique values. Click the Save Config button to save the plug-in settings. Toggle the switch to the Plugin is enabled position to enable the plug-in. Additional resources Distributed Numeric Assignment plug-in attributes . Multiple attributes in the same range
[ "ldapmodify -D \"cn=Directory Manager\" -W -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: posixAccount uid: jsmith *cn: John Smith uidNumber: 0 gidNumber: 0", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber uidNumber: 0 - add:gidNumber gidNumber: 0", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber idNumber: 0 ^D ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: employeeId employeeId: magic", "dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=people,dc=example,dc=com dnaNextValue: 1", "dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=people,dc=example,dc=com dnanextvalue: 1 dnaMaxValue: 1300 dnasharedcfgdn: cn=Account UIDs,ou=ranges,dc=example,dc=com dnathreshold: 100 dnaRangeRequestTimeout: 60 dnaNextRange: 1301-2301", "dn: dnaHostname=ldap1.example.com+dnaPortNum=389,cn=Account UIDs,ou=Ranges,dc=example,dc=com objectClass: dnaSharedConfig objectClass: top dnahostname: ldap1.example.com dnaPortNum: 389 dnaSecurePortNum: 636 dnaRemainingValues: 1000", "dsconf -D \"cn=Directory Manager\" instance_name plugin dna config \"Account UIDs\" add --type uidNumber --filter \"(objectclass=posixAccount)\" --scope ou=People,dc=example,dc=com --next-value 1 --max-value 1300 --shared-config-entry \"cn=Account UIDs,ou=Ranges,dc=example,dc=com\" --threshold 100 --range-request-timeout 60 --magic-regen 99999 Successfully created the cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject objectclass: organizationalUnit ou: Ranges - dn: cn=Account UIDs,ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject cn: Account UIDs", "dsconf -D \"cn=Directory Manager\" instance_name plugin dna enable Enabled plugin 'Distributed Numeric Assignment Plugin'", "dsconf -D \"cn=Directory Manager\" instance_name plugin dna config \"Account UIDs\" show dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config cn: Account UIDs dnaFilter: \"(objectclass=posixAccount)\" dnaInterval: 1 dnaMagicRegen: 99999 dnaMaxValue: 1300 dnaNextValue: 1 dnaRangeRequestTimeout: 60 dnaScope: ou=People,dc=example,dc=com dnaSharedCfgDN: cn=Account UIDs,ou=Ranges,dc=example,dc=com dnaThreshold: 100 dnaType: uidNumber objectClass: top objectClass: dnaPluginConfig" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_directory_attributes_and_values/assigning-and-managing-unique-numeric-attribute-values_managing-directory-attributes-and-values
Chapter 42. File Systems
Chapter 42. File Systems File system DAX is now available for ext4 and XFS as a Technology Preview Starting with Red Hat Enterprise Linux 7.3, Direct Access (DAX) provides, as a Technology Preview, a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that supports DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1274459) pNFS block layout is now available As a Technology Preview, Red Hat Enterprise Linux clients can now mount pNFS shares with the block layout feature. Note that Red Hat recommends using the pNFS SCSI layout instead, which is similar to block layout but easier to use. (BZ#1111712) pNFS SCSI layout is now available for client and server Client and server support for parallel NFS (pNFS) SCSI layouts is provided as a Technology Preview starting with Red Hat Enterprise Linux 7.3. Building on the work of block layouts, the pNFS layout is defined across SCSI devices and contains sequential series of fixed-size blocks as logical units that must be capable of supporting SCSI persistent reservations. The Logical Unit (LU) devices are identified by their SCSI device identification, and fencing is handled through the assignment of reservations. (BZ#1305092) OverlayFS OverlayFS is a type of union file system. It allows the user to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. Refer to the kernel file Documentation/filesystems/overlayfs.txt for additional information. OverlayFS remains a Technology Preview in Red Hat Enterprise Linux 7.5 under most circumstances. As such, the kernel will log warnings when this technology is activated. Full support is available for OverlayFS when used with Docker under the following restrictions: OverlayFS is only supported for use as a Docker graph driver. Its use can only be supported for container COW content, not for persistent storage. Any persistent storage must be placed on non-OverlayFS volumes to be supported. Only default Docker configuration can be used; that is, one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. On Red Hat Enterprise Linux 7.3 and earlier, SELinux must be enabled and in enforcing mode on the physical machine, but must be disabled in the container when performing container separation, that is the /etc/sysconfig/docker file must not contain --selinux-enabled . Starting with Red Hat Enterprise Linux 7.4, OverlayFS supports SELinux security labels, and you can enable SELinux support for containers by specifying --selinux-enabled in /etc/sysconfig/docker . The OverlayFS kernel ABI and userspace behavior are not considered stable, and may see changes in future updates. In order to make the yum and rpm utilities work properly inside the container, the user should be using the yum-plugin-ovl packages. Note that OverlayFS provides a restricted set of the POSIX standards. 
Test your application thoroughly before deploying it with OverlayFS. Note that XFS file systems must be created with the -n ftype=1 option enabled for use as an overlay. For the root file system and any file systems created during system installation, set the --mkfsoptions=-n ftype=1 parameter in the Anaconda Kickstart. When creating a new file system after the installation, run the # mkfs -t xfs -n ftype=1 /PATH/TO/DEVICE command. To determine whether an existing file system is eligible for use as an overlay, run the # xfs_info /PATH/TO/DEVICE | grep ftype command to see if the ftype=1 option is enabled (see the sketch at the end of this chapter). There are also several known issues associated with OverlayFS as of the Red Hat Enterprise Linux 7.5 release. For details, see Non-standard behavior in the Documentation/filesystems/overlayfs.txt file. (BZ#1206277) Btrfs file system The Btrfs (B-Tree) file system is available as a Technology Preview in Red Hat Enterprise Linux 7. Red Hat Enterprise Linux 7.4 introduced the last planned update to this feature. Btrfs has been deprecated, which means Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. (BZ#1477977) New package: ima-evm-utils The ima-evm-utils package provides utilities to label the file system and verify the integrity of your system at run time using the Integrity Measurement Architecture (IMA) and Extended Verification Module (EVM) features. These utilities enable you to monitor if files have been accidentally or maliciously altered. The ima-evm-utils package is now available as a Technology Preview. (BZ#1384450)
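Returning to the OverlayFS ftype requirement above, a hedged sketch of both approaches; the disk layout and device names are illustrative:

# Kickstart: create the file system with ftype=1 at installation time
part / --fstype=xfs --mkfsoptions="-n ftype=1" --size=10240 --ondisk=sda

# After installation: create and verify a new file system
mkfs -t xfs -n ftype=1 /dev/sdb1
xfs_info /dev/sdb1 | grep ftype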
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_file_systems
Chapter 5. Image service with multiple stores
Chapter 5. Image service with multiple stores The Red Hat OpenStack Platform (RHOSP) Image service (glance) supports using multiple stores with distributed edge architecture so that you can have an image pool at every edge site. 5.1. Image copies on multiple stores When you use multiple stores with distributed edge architecture, you can have an image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites. The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores. For more information about locations, see Understanding the location of images . With a RADOS Block Device (RBD) image pool at every edge site, you can boot Virtual Machines (VMs) quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can boot VMs from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Block Device Guide . When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central image store to edge sites to save time during instance launch. 5.2. Requirements of storage edge architecture Refer to the following requirements to use images with edge sites: A copy of each image must exist in the Image service (glance) at the central location. You must copy images from an edge site to the central location before you can copy them to other edge sites. You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage. For each site, you must assign the same value to the NovaComputeAvailabilityZone and CinderStorageAvailabilityZone parameters. 5.3. Multiple Block Storage service stores You can configure multiple Block Storage service (cinder) back ends for the Image service (glance), and configure volume types for each back end by using the enabled_backends and cinder_volume_type configuration options in the glance-api.conf file. While you can associate one back end with multiple volume types in the Block Storage service, you can only associate a back end with one volume type in the Image service. The Image service generates a location URL with a unique ID to identify which back end an image is stored in. When you upgrade from using a single Block Storage service store to using multiple Block Storage service stores, the location URLs for legacy images are updated from cinder://volume-id to cinder://store-name/volume-id . Example 1: New deployment with two volume types The following example shows the Image service configuration in the glance-api.conf file for a new deployment when the Block Storage service has two volume types, for example, fast and slow : Example 2: Upgrade from single store to multiple stores The following example shows the Image service configuration in the glance-api.conf file for an upgrade from a single Block Storage service store to multiple stores. You must identify the default_volume_type that is used in cinder.conf , and update the cinder_volume_type in glance-api.conf to match: 5.4. Importing an image to multiple stores Use the interoperable image import workflow to import image data into multiple Red Hat Ceph Storage clusters. 
You can import images to the Image service (glance) that are available on the local file system or through a web server. If you import an image from a web server, the image can be imported into multiple stores at once. If the image is not available on a web server, you can import the image from a local file system into the central store and then copy it to additional stores. For more information, see Copy an existing image to multiple stores . Use the Image service command-line client for image management. Important Always store an image copy on the central site, even if there are no instances using the image at the central location. For more information about importing images into the Image service, see the Deploying a Distributed Compute Node architecture guide. 5.4.1. Managing image import failures You can manage failures of the image import operation by using the --allow-failure parameter: If you set the value of the --allow-failure parameter to true , the image status becomes active after the first store successfully imports the data. This is the default setting. You can view a list of stores that failed to import the image data by using the os_glance_failed_import image property. If you set the value of the --allow-failure parameter to false , the image status only becomes active after all specified stores successfully import the data. Failure of any store to import the image data results in an image status of failed . The image is not imported into any of the specified stores. 5.4.2. Importing image data to multiple stores Because the default setting of the --allow-failure parameter is true , you do not need to include the parameter in the command if it is acceptable for some stores to fail to import the image data. Note This procedure does not require all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store-1> , <store-2> , and <store-3> with the names of the stores to which you want to import the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. Note The glance image-create-via-import command, which automatically converts the QCOW2 image to RAW format, works only with the web-download method. The glance-direct method is available, but it works only in deployments with a configured shared file system. 5.4.3. Importing image data to multiple stores without failure This procedure requires all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store-1> , <store-2> , and <store-3> with the names of stores to which you want to copy the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. Note With the --allow-failure parameter set to false , the Image service (glance) does not ignore stores that fail to import the image data. You can view the list of failed stores with the image property os_glance_failed_import . For more information, see Section 5.5, "Checking the progress of the image import operation" . Verify that the image data was added to specific stores: Replace <image-id> with the ID of the original existing image. The output displays a comma-delimited list of stores.
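Because a partial failure still leaves the image active by default, it is worth checking the failure property after an import. A minimal sketch, where <image-id> is the ID of the imported image:

$ glance image-show <image-id> | grep os_glance_failed_import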
5.4.4. Importing image data to a single store You can use the Image service (glance) to import image data to a single store. Procedure Import image data to a single store: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store> with the name of the store to which you want to copy the image data. Note If you do not include the options of --stores , --all-stores , or --store in the command, the Image service creates the image in the central store. Verify that the image data was added to the specific store: Replace <image-id> with the ID of the original existing image. The output displays a comma-delimited list of stores. 5.5. Checking the progress of the image import operation The interoperable image import workflow sequentially imports image data into stores. The size of the image, the number of stores, and the network speed between the central site and the edge sites impact how long it takes for the image import operation to complete. You can follow the progress of the image import by looking at two image properties, which appear in notifications sent during the image import operation: The os_glance_importing_to_stores property lists the stores that have not imported the image data. At the beginning of the import, all requested stores show up in the list. Each time a store successfully imports the image data, the Image service removes the store from the list. The os_glance_failed_import property lists the stores that fail to import the image data. This list is empty at the beginning of the image import operation. Note In the following procedure, the environment has three Red Hat Ceph Storage clusters: the central store and two stores at the edge, dcn0 and dcn1 . Procedure Verify that the image data was added to specific stores: Replace <image-id> with the ID of the original existing image. The output displays a comma-delimited list of stores similar to the following example snippet: Monitor the status of the image import operation. When you precede a command with watch , the command output refreshes every two seconds. Replace <image-id> with the ID of the original existing image. The status of the operation changes as the image import operation progresses: Output that shows that an image failed to import resembles the following example: After the operation completes, the status changes to active: 5.6. Copying an existing image to multiple stores With this feature, you can copy existing Image service (glance) image data into multiple Red Hat Ceph Storage stores at the edge by using the interoperable image import workflow. Note The image must be present at the central site before you copy it to any edge sites. Only the image owner or administrator can copy existing images to newly added stores. You can copy existing image data either by setting --all-stores to true or by specifying specific stores to receive the image data. The default setting for the --all-stores option is false . If --all-stores is false , you must specify which stores receive the image data by using --stores <store-1>,<store-2> . If the image data is already present in any of the specified stores, the request fails. If you set --all-stores to true , and the image data already exists in some of the stores, then those stores are excluded from the list. After you specify which stores receive the image data, the Image service (glance) copies data from the central site to a staging area.
Then the Image service imports the image data by using the interoperable image import workflow. For more information, see Importing an image to multiple stores . Use the Image service command-line client for image management. Important Red Hat recommends that administrators carefully avoid closely timed image copy requests. Two closely timed copy-image operations for the same image cause race conditions and unexpected results. Existing image data remains as it is, but copying data to new stores fails. 5.6.1. Copying an image to all stores Use the following procedure to copy image data to all available stores. Procedure Copy image data to all available stores: Replace <image-id> with the ID of the image you want to copy. Confirm that the image data successfully replicated to all available stores: For information about how to check the status of the image import operation, see Section 5.5, "Checking the progress of the image import operation" . 5.6.2. Copying an image to specific stores Use the following procedure to copy image data to specific stores. Procedure Copy image data to specific stores: Replace <image-id> with the ID of the image you want to copy. Replace <store-1> and <store-2> with the names of the stores to which you want to copy the image data. Confirm that the image data successfully replicated to the specified stores: For information about how to check the status of the image import operation, see Section 5.5, "Checking the progress of the image import operation" . 5.7. Deleting an image from a specific store Delete an existing image copy on a specific store by using the Red Hat OpenStack Platform (RHOSP) Image service (glance). Use the Image service command-line client for image management. Procedure Delete an image from a specific store: Replace <store-id> with the name of the store from which you want to delete the image copy. Replace <image-id> with the ID of the image you want to delete. Warning The glance image-delete command permanently deletes the image across all the sites. All image copies are deleted, as well as the image instance and metadata. 5.8. Listing image locations and location properties Although an image can be present on multiple sites, there is only a single Universal Unique Identifier (UUID) for a given image. The image metadata contains the locations of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site and the two edge sites. Note Use the Image service (glance) command-line client instead of the OpenStack command-line client for image management. However, use the openstack image show command to list image location properties. The glance image-show command output does not include locations. Procedure Show the sites on which a copy of the image exists: In the example, the image is present on the central site, the default_backend , and on the two edge sites dcn1 and dcn2 . Alternatively, you can run the glance image-list command with the --include-stores option to see the sites where the images exist: List the image location properties to show the details of each location: The image properties show the different Ceph RBD URIs for the location of each image. In the example, the central image location URI is: The URI is composed of the following data: 79b70c32-df46-4741-93c0-8118ae2ae284 corresponds to the central Ceph FSID. Each Ceph cluster has a unique FSID. The default value for all sites is images , which corresponds to the Ceph pool on which the images are stored.
2bd882e7-1da0-4078-97fe-f1bb81f61b00 corresponds to the image UUID. The UUID is the same for a given image regardless of its location. The metadata shows the glance store to which this location maps. In this example, it maps to the default_backend , which is the central hub site.
[ "list of enabled stores identified by their property group name enabled_backends = fast:cinder, slow:cinder the default store, if not set glance-api service will not start [glance_store] default_backend = fast conf props for fast store instance [fast] rootwrap_config = /etc/glance/rootwrap.conf cinder_volume_type = glance-fast description = LVM based cinder store cinder_catalog_info = volumev2::publicURL cinder_store_auth_address = http://localhost/identity/v3 cinder_store_user_name = glance cinder_store_password = admin cinder_store_project_name = service conf props for slow store instance [slow] rootwrap_config = /etc/glance/rootwrap.conf cinder_volume_type = glance-slow description = NFS based cinder store cinder_catalog_info = volumev2::publicURL cinder_store_auth_address = http://localhost/identity/v3 cinder_store_user_name = glance cinder_store_password = admin cinder_store_project_name = service", "new configuration in glance [DEFAULT] enabled_backends = old:cinder, new:cinder [glance_store] default_backend = new rootwrap_config = /etc/glance/rootwrap.conf cinder_volume_type = glance-new description = LVM based cinder store cinder_catalog_info = volumev2::publicURL cinder_store_auth_address = http://localhost/identity/v3 cinder_store_user_name = glance cinder_store_password = admin cinder_store_project_name = service", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --stores <store-1>,<store-2>,<store-3>", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --stores <store-1>,<store-2>,<store-3>", "glance image-show <image-id> | grep stores", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --store <store>", "glance image-show <image-id> | grep stores", "glance image-show <image-id>", "| os_glance_failed_import | | os_glance_importing_to_stores | central,dcn0,dcn1 | status | importing", "watch glance image-show <image-id>", "| os_glance_failed_import | | os_glance_importing_to_stores | dcn0,dcn1 | status | importing", "| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | dcn1 | status | importing", "| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | | status | active", "glance image-import <image-id> --all-stores true --import-method copy-image", "glance image-list --include-stores", "glance image-import <image-id> --stores <store-1>,<store-2> --import-method copy-image", "glance image-list --include-stores", "glance stores-delete --store <store-id> <image-id>", "glance image-show ID | grep \"stores\" | stores | default_backend,dcn1,dcn2", "glance image-list --include-stores | ID | Name | Stores | 2bd882e7-1da0-4078-97fe-f1bb81f61b00 | cirros | default_backend,dcn1,dcn2", "openstack image show ID -c properties | properties | (--- cut ---) locations='[{'url': 'rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://63df2767-8ddb-4e06-8186-8c155334f487/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn1'}}, {'url': 'rbd://1b324138-2ef9-4ef9-bd9e-aa7e6d6ead78/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn2'}}]', (--- cut --)", "rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/assembly_image-service-with-multiple-stores_glance-creating-images
Chapter 7. Troubleshooting
Chapter 7. Troubleshooting 7.1. Troubleshooting the installer workflow Prior to troubleshooting the installation environment, it is critical to understand the overall flow of the installer-provisioned installation on bare metal. The diagrams below provide a troubleshooting flow with a step-by-step breakdown for the environment. Workflow 1 of 4 illustrates a troubleshooting workflow when the install-config.yaml file has errors or the Red Hat Enterprise Linux CoreOS (RHCOS) images are inaccessible. Troubleshooting suggestions can be found at Troubleshooting install-config.yaml . Workflow 2 of 4 illustrates a troubleshooting workflow for bootstrap VM issues , bootstrap VMs that cannot boot up the cluster nodes , and inspecting logs . When installing an OpenShift Container Platform cluster without the provisioning network, this workflow does not apply. Workflow 3 of 4 illustrates a troubleshooting workflow for cluster nodes that will not PXE boot . If installing using RedFish Virtual Media, each node must meet minimum firmware requirements for the installer to deploy the node. See Firmware requirements for installing with virtual media in the Prerequisites section for additional details. Workflow 4 of 4 illustrates a troubleshooting workflow from a non-accessible API to a validated installation . 7.2. Troubleshooting install-config.yaml The install-config.yaml configuration file represents all of the nodes that are part of the OpenShift Container Platform cluster. The file contains the necessary options consisting of but not limited to apiVersion , baseDomain , imageContentSources and virtual IP addresses. If errors occur early in the deployment of the OpenShift Container Platform cluster, the errors are likely in the install-config.yaml configuration file. Procedure Use the guidelines in YAML-tips . Verify the YAML syntax is correct using syntax-check . Verify the Red Hat Enterprise Linux CoreOS (RHCOS) QEMU images are properly defined and accessible via the URL provided in the install-config.yaml . For example: $ curl -s -o /dev/null -I -w "%{http_code}\n" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.<architecture>.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7 If the output is 200 , there is a valid response from the webserver storing the bootstrap VM image.
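One hedged way to run the syntax check mentioned above locally, assuming Python 3 with the PyYAML module is available on the provisioner host:

$ python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml"))' && echo "Syntax OK"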
7.3. Bootstrap VM issues The OpenShift Container Platform installation program spawns a bootstrap node virtual machine, which handles provisioning the OpenShift Container Platform cluster nodes. Procedure About 10 to 15 minutes after triggering the installation program, check to ensure the bootstrap VM is operational using the virsh command: $ sudo virsh list Id Name State -------------------------------------------- 12 openshift-xf6fq-bootstrap running Note The name of the bootstrap VM is always the cluster name followed by a random set of characters and ending in the word "bootstrap." If the bootstrap VM is not running after 10-15 minutes, troubleshoot why it is not running. Possible issues include: Verify libvirtd is running on the system: $ systemctl status libvirtd ● libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 9850 (libvirtd) Tasks: 20 (limit: 32768) Memory: 74.8M CGroup: /system.slice/libvirtd.service ├─ 9850 /usr/sbin/libvirtd If the bootstrap VM is operational, log in to it. Use the virsh console command to find the IP address of the bootstrap VM: $ sudo virsh console example.com Connected to domain example.com Escape character is ^] Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519) SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA) SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA) ens3: fd35:919d:4042:2:c7ed:9a9f:a9ec:7 ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f localhost login: Important When deploying an OpenShift Container Platform cluster without the provisioning network, you must use a public IP address and not a private IP address like 172.22.0.2 . After you obtain the IP address, log in to the bootstrap VM using the ssh command: Note In the console output of the previous step, you can use the IPv6 IP address provided by ens3 or the IPv4 IP address provided by ens4 . $ ssh core@172.22.0.2 If you are not successful logging in to the bootstrap VM, you have likely encountered one of the following scenarios: You cannot reach the 172.22.0.0/24 network. Verify the network connectivity between the provisioner and the provisioning network bridge. This issue might occur if you are using a provisioning network. You cannot reach the bootstrap VM through the public network. When attempting to SSH over the baremetal network, verify connectivity on the provisioner host specifically around the baremetal network bridge. You encountered Permission denied (publickey,password,keyboard-interactive) . When attempting to access the bootstrap VM, a Permission denied error might occur. Verify that the SSH key for the user attempting to log in to the VM is set within the install-config.yaml file. 7.3.1. Bootstrap VM cannot boot up the cluster nodes During the deployment, it is possible for the bootstrap VM to fail to boot the cluster nodes, which prevents the VM from provisioning the nodes with the RHCOS image. This scenario can arise due to: A problem with the install-config.yaml file. Issues with out-of-band network access when using the baremetal network. To verify the issue, there are two containers related to ironic : ironic ironic-inspector Procedure Log in to the bootstrap VM: $ ssh core@172.22.0.2 To check the container logs, execute the following: [core@localhost ~]$ sudo podman logs -f <container_name> Replace <container_name> with one of ironic or ironic-inspector . If you encounter an issue where the control plane nodes are not booting up from PXE, check the ironic pod. The ironic pod contains information about the attempt to boot the cluster nodes, because it attempts to log in to the node over IPMI. Potential reason The cluster nodes might be in the ON state when deployment started. Solution Power off the OpenShift Container Platform cluster nodes before you begin the installation over IPMI: $ ipmitool -I lanplus -U root -P <password> -H <out_of_band_ip> power off
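If you are unsure of the exact container names on the bootstrap VM, you can list the running ironic-related containers first; a minimal sketch:

[core@localhost ~]$ sudo podman ps --format "{{.Names}}" | grep ironic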
7.3.2. Inspecting logs When experiencing issues downloading or accessing the RHCOS images, first verify that the URL is correct in the install-config.yaml configuration file. Example of internal webserver hosting RHCOS images bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.<architecture>.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.<architecture>.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0 The coreos-downloader container downloads resources from a webserver or from the external quay.io registry, whichever the install-config.yaml configuration file specifies. Verify that the coreos-downloader container is up and running and inspect its logs as needed. Procedure Log in to the bootstrap VM: $ ssh core@172.22.0.2 Check the status of the coreos-downloader container within the bootstrap VM by running the following command: [core@localhost ~]$ sudo podman logs -f coreos-downloader If the bootstrap VM cannot access the URL to the images, use the curl command to verify that the VM can access the images. To inspect the bootkube logs that indicate if all the containers launched during the deployment phase, execute the following: [core@localhost ~]$ journalctl -xe [core@localhost ~]$ journalctl -b -f -u bootkube.service Verify all the pods, including dnsmasq , mariadb , httpd , and ironic , are running: [core@localhost ~]$ sudo podman ps If there are issues with the pods, check the logs of the containers with issues. To check the logs of the ironic service, run the following command: [core@localhost ~]$ sudo podman logs ironic 7.4. Cluster nodes will not PXE boot When OpenShift Container Platform cluster nodes will not PXE boot, execute the following checks on the cluster nodes that will not PXE boot. This procedure does not apply when installing an OpenShift Container Platform cluster without the provisioning network. Procedure Check the network connectivity to the provisioning network. Ensure PXE is enabled on the NIC for the provisioning network and PXE is disabled for all other NICs. Verify that the install-config.yaml configuration file includes the rootDeviceHints parameter and boot MAC address for the NIC connected to the provisioning network. For example: control plane node settings Worker node settings
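The control plane and worker settings referenced above are not reproduced in this section, so the following is an illustrative sketch only; the MAC address and device name are assumptions you must replace with your own values:

hosts:
  - name: openshift-master-0
    bootMACAddress: 52:54:00:aa:bb:cc
    rootDeviceHints:
      deviceName: "/dev/sda"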
7.5. Unable to discover new bare metal hosts using the BMC In some cases, the installation program will not be able to discover the new bare metal hosts and issue an error, because it cannot mount the remote virtual media share. For example: ProvisioningError 51s metal3-baremetal-controller Image provisioning failed: Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://<bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.8.GeneralError: A general error has occurred. See ExtendedInfo for more information Extended information: [ { "Message": "Unable to mount remote share https://<ironic_address>/redfish/boot-<uuid>.iso.", "MessageArgs": [ "https://<ironic_address>/redfish/boot-<uuid>.iso" ], "[email protected]": 1, "MessageId": "IDRAC.2.5.RAC0720", "RelatedProperties": [ "#/Image" ], "[email protected]": 1, "Resolution": "Retry the operation.", "Severity": "Informational" } ]. In this situation, if you are using virtual media with an unknown certificate authority, you can configure your baseboard management controller (BMC) remote file share settings to trust an unknown certificate authority to avoid this error. Note This resolution was tested on OpenShift Container Platform 4.11 with Dell iDRAC 9 and firmware version 5.10.50. 7.6. The API is not accessible When the cluster is running and clients cannot access the API, domain name resolution issues might impede access to the API. Procedure Hostname Resolution: Check the cluster nodes to ensure they have a fully qualified domain name, and not just localhost.localdomain . For example: $ hostname If a hostname is not set, set the correct hostname. For example: $ hostnamectl set-hostname <hostname> Incorrect Name Resolution: Ensure that each node has the correct name resolution in the DNS server using dig and nslookup . For example: $ dig api.<cluster_name>.example.com ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster_name>.example.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good) ;; QUESTION SECTION: ;api.<cluster_name>.example.com. IN A ;; ANSWER SECTION: api.<cluster_name>.example.com. 10800 IN A 10.19.13.86 ;; AUTHORITY SECTION: <cluster_name>.example.com. 10800 IN NS <cluster_name>.example.com. ;; ADDITIONAL SECTION: <cluster_name>.example.com. 10800 IN A 10.19.14.247 ;; Query time: 0 msec ;; SERVER: 10.19.14.247#53(10.19.14.247) ;; WHEN: Tue May 19 20:30:59 UTC 2020 ;; MSG SIZE rcvd: 140 The output in the foregoing example indicates that the appropriate IP address for the api.<cluster_name>.example.com VIP is 10.19.13.86 . This IP address should reside on the baremetal network. 7.7. Troubleshooting worker nodes that cannot join the cluster Installer-provisioned clusters deploy with a DNS server that includes a DNS entry for the api-int.<cluster_name>.<base_domain> URL. If the nodes within the cluster use an external or upstream DNS server to resolve the api-int.<cluster_name>.<base_domain> URL and there is no such entry, worker nodes might fail to join the cluster. Ensure that all nodes in the cluster can resolve the domain name. Procedure Add a DNS A/AAAA or CNAME record to internally identify the API load balancer. For example, when using dnsmasq, modify the dnsmasq.conf configuration file: $ sudo nano /etc/dnsmasq.conf address=/api-int.<cluster_name>.<base_domain>/<IP_address> address=/api-int.mycluster.example.com/192.168.1.10 address=/api-int.mycluster.example.com/2001:0db8:85a3:0000:0000:8a2e:0370:7334 Add a DNS PTR record to internally identify the API load balancer. For example, when using dnsmasq, modify the dnsmasq.conf configuration file: $ sudo nano /etc/dnsmasq.conf ptr-record=<IP_address>.in-addr.arpa,api-int.<cluster_name>.<base_domain> ptr-record=10.1.168.192.in-addr.arpa,api-int.mycluster.example.com Restart the DNS server. For example, when using dnsmasq, execute the following command: $ sudo systemctl restart dnsmasq These records must be resolvable from all the nodes within the cluster.
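A quick hedged check from any cluster node that the internal API name resolves, using the same dig utility shown earlier; replace the cluster and domain names with your own, and expect the address configured above:

$ dig api-int.<cluster_name>.<base_domain> +short
192.168.1.10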
7.8. Cleaning up installations

In the event of a failed deployment, remove the artifacts from the failed attempt before attempting to deploy OpenShift Container Platform again.

Procedure

Power off all bare metal nodes prior to installing the OpenShift Container Platform cluster:

$ ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off

Remove all old bootstrap resources if any are left over from a previous deployment attempt:

for i in $(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print $2'});
do
  sudo virsh destroy $i;
  sudo virsh undefine $i;
  sudo virsh vol-delete $i --pool $i;
  sudo virsh vol-delete $i.ign --pool $i;
  sudo virsh pool-destroy $i;
  sudo virsh pool-undefine $i;
done

Remove the following from the clusterconfigs directory to prevent Terraform from failing:

$ rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json

7.9. Issues with creating the registry

When creating a disconnected registry, you might encounter a "User Not Authorized" error when attempting to mirror the registry. This error might occur if you fail to append the new authentication to the existing pull-secret.txt file.

Procedure

Check to ensure authentication is successful:

$ /usr/local/bin/oc adm release mirror \
  -a pull-secret-update.json --from=$UPSTREAM_REPO \
  --to-release-image=$LOCAL_REG/$LOCAL_REPO:${VERSION} \
  --to=$LOCAL_REG/$LOCAL_REPO

Note: Example output of the variables used to mirror the install images:

UPSTREAM_REPO=${RELEASE_IMAGE}
LOCAL_REG=<registry_FQDN>:<registry_port>
LOCAL_REPO='ocp4/openshift4'

The values of RELEASE_IMAGE and VERSION were set during the Retrieving OpenShift Installer step of the Setting up the environment for an OpenShift installation section.

After mirroring the registry, confirm that you can access it in your disconnected environment:

$ curl -k -u <user>:<password> https://registry.example.com:<registry_port>/v2/_catalog
{"repositories":["<Repo_Name>"]}

7.10. Miscellaneous issues

7.10.1. Addressing the runtime network not ready error

After the deployment of a cluster you might receive the following error:

`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`

The Cluster Network Operator is responsible for deploying the networking components in response to a special object created by the installer. It runs very early in the installation process, after the control plane (master) nodes have come up, but before the bootstrap control plane has been torn down. This error can indicate more subtle installer issues, such as long delays in bringing up control plane (master) nodes or issues with apiserver communication.

Procedure

Inspect the pods in the openshift-network-operator namespace:

$ oc get all -n openshift-network-operator
NAME                                    READY   STATUS              RESTARTS   AGE
pod/network-operator-69dfd7b577-bg89v   0/1     ContainerCreating   0          149m

On the provisioner node, determine that the network configuration exists:

$ kubectl get network.config.openshift.io cluster -oyaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  serviceNetwork:
  - 172.30.0.0/16
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OVNKubernetes

If it does not exist, the installer did not create it. To determine why the installer did not create it, execute the following:

$ openshift-install create manifests

Check that the network-operator is running:

$ kubectl -n openshift-network-operator get pods

Retrieve the logs:

$ kubectl -n openshift-network-operator logs -l "name=network-operator"

On high availability clusters with three or more control plane (master) nodes, the Operator will perform leader election and all other Operators will sleep. For additional details, see Troubleshooting. A combined check sketch follows.
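When diagnosing the runtime network not ready error, it can help to run the documented checks in one pass. The following is a minimal sketch that combines them; it assumes an authenticated oc/kubectl session with cluster-admin privileges and only uses the commands shown above.

#!/bin/bash
# Minimal sketch: run the runtime-network-not-ready checks in sequence.
# Assumes an authenticated oc/kubectl session against the affected cluster.

echo "== Pods in openshift-network-operator =="
oc get all -n openshift-network-operator

echo "== Cluster network configuration =="
if ! kubectl get network.config.openshift.io cluster -o yaml; then
  echo "Network config is missing; the installer did not create it." >&2
  # Re-running manifest creation can show why (see the step above):
  # openshift-install create manifests
fi

echo "== Cluster Network Operator logs =="
kubectl -n openshift-network-operator logs -l "name=network-operator" --tail=50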
7.10.2. Addressing the "No disk found with matching rootDeviceHints" error message

After you deploy a cluster, you might receive the following error message:

No disk found with matching rootDeviceHints

To address the No disk found with matching rootDeviceHints error message, a temporary workaround is to change the rootDeviceHints to minSizeGigabytes: 300. After you change the rootDeviceHints settings, boot the CoreOS and then verify the disk information by using the following command:

$ udevadm info /dev/sda

If you are using DL360 Gen 10 servers, be aware that they have an SD-card slot that might be assigned the /dev/sda device name. If no SD card is present in the server, it can cause conflicts. Ensure that the SD card slot is disabled in the server's BIOS settings.

If the minSizeGigabytes workaround is not fulfilling the requirements, you might need to revert rootDeviceHints back to /dev/sda. This change allows ironic images to boot successfully.

An alternative approach to fixing this problem is by using the serial ID of the disk. However, be aware that finding the serial ID can be challenging and might make the configuration file less readable. If you choose this path, ensure that you gather the serial ID using the previously documented command and incorporate it into your configuration.

7.10.3. Cluster nodes not getting the correct IPv6 address over DHCP

If the cluster nodes are not getting the correct IPv6 address over DHCP, check the following:

Ensure the reserved IPv6 addresses reside outside the DHCP range.

In the IP address reservation on the DHCP server, ensure the reservation specifies the correct DHCP Unique Identifier (DUID). For example:

# This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC
id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]

Ensure that route announcements are working.

Ensure that the DHCP server is listening on the required interfaces serving the IP address ranges.

7.10.4. Cluster nodes not getting the correct hostname over DHCP

During IPv6 deployment, cluster nodes must get their hostname over DHCP. Sometimes NetworkManager does not assign the hostname immediately. A control plane (master) node might report an error such as:

Failed Units: 2
  NetworkManager-wait-online.service
  nodeip-configuration.service

This error indicates that the cluster node likely booted without first receiving a hostname from the DHCP server, which causes kubelet to boot with a localhost.localdomain hostname. To address the error, force the node to renew the hostname.

Procedure

Retrieve the hostname:

[core@master-X ~]$ hostname

If the hostname is localhost, proceed with the following steps.

Note: Where X is the control plane node number.

Force the cluster node to renew the DHCP lease:

[core@master-X ~]$ sudo nmcli con up "<bare_metal_nic>"

Replace <bare_metal_nic> with the wired connection corresponding to the baremetal network.

Check the hostname again:

[core@master-X ~]$ hostname

If the hostname is still localhost.localdomain, restart NetworkManager:

[core@master-X ~]$ sudo systemctl restart NetworkManager

If the hostname is still localhost.localdomain, wait a few minutes and check again. If the hostname remains localhost.localdomain, repeat the steps.

Restart the nodeip-configuration service:

[core@master-X ~]$ sudo systemctl restart nodeip-configuration.service

This service will reconfigure the kubelet service with the correct hostname references.
Reload the unit file definitions because the kubelet configuration changed in the previous step:

[core@master-X ~]$ sudo systemctl daemon-reload

Restart the kubelet service:

[core@master-X ~]$ sudo systemctl restart kubelet.service

Ensure kubelet booted with the correct hostname:

[core@master-X ~]$ sudo journalctl -fu kubelet.service

If the cluster node is not getting the correct hostname over DHCP after the cluster is up and running, such as during a reboot, the cluster will have a pending csr. Do not approve a csr, or other issues might arise.

Addressing a csr

Get CSRs on the cluster:

$ oc get csr

Verify if a pending csr contains Subject Name: localhost.localdomain:

$ oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text

Remove any csr that contains Subject Name: localhost.localdomain:

$ oc delete csr <wrong_csr>

A sketch that automates this inspection appears after the Failed Ignition subsection below.

7.10.5. Routes do not reach endpoints

During the installation process, it is possible to encounter a Virtual Router Redundancy Protocol (VRRP) conflict. This conflict might occur if a previously used OpenShift Container Platform node that was once part of a cluster deployment using a specific cluster name is still running but not part of the current OpenShift Container Platform cluster deployment using that same cluster name. For example, a cluster was deployed using the cluster name openshift, deploying three control plane (master) nodes and three worker nodes. Later, a separate install uses the same cluster name openshift, but this redeployment only installed three control plane (master) nodes, leaving the three worker nodes from the previous deployment in an ON state. This might cause a Virtual Router Identifier (VRID) conflict and a VRRP conflict.

Get the route:

$ oc get route oauth-openshift

Check the service endpoint:

$ oc get svc oauth-openshift
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
oauth-openshift   ClusterIP   172.30.19.162   <none>        443/TCP   59m

Attempt to reach the service from a control plane (master) node:

[core@master0 ~]$ curl -k https://172.30.19.162
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

Identify the authentication-operator errors from the provisioner node:

$ oc logs deployment/authentication-operator -n openshift-authentication-operator
Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"225c5bd5-b368-439b-9155-5fd3c0459d98", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting"

Solution

Ensure that the cluster name for every deployment is unique, ensuring no conflict.

Turn off all the rogue nodes which are not part of the cluster deployment that are using the same cluster name. Otherwise, the authentication pod of the OpenShift Container Platform cluster might never start successfully.

7.10.6. Failed Ignition during Firstboot

During the Firstboot, the Ignition configuration may fail.

Procedure

Connect to the node where the Ignition configuration failed:

Failed Units: 1
  machine-config-daemon-firstboot.service

Restart the machine-config-daemon-firstboot service:

[core@worker-X ~]$ sudo systemctl restart machine-config-daemon-firstboot.service
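Returning to the pending CSR cleanup described in "Cluster nodes not getting the correct hostname over DHCP", the per-CSR inspection can be scripted. The following hedged sketch only lists CSRs whose subject contains localhost.localdomain; review the output before deleting anything with oc delete csr.

#!/bin/bash
# Minimal sketch: list CSRs whose subject contains localhost.localdomain,
# indicating a node that booted before receiving its hostname over DHCP.
# Prints candidates only; it deliberately does not delete anything.
for csr in $(oc get csr -o name); do
  subject=$(oc get "${csr}" -o jsonpath='{.spec.request}' \
    | base64 --decode | openssl req -noout -subject)
  if echo "${subject}" | grep -q 'localhost.localdomain'; then
    echo "Candidate for removal: ${csr} (${subject})"
  fi
done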
7.10.7. NTP out of sync

The deployment of OpenShift Container Platform clusters depends on NTP synchronized clocks among the cluster nodes. Without synchronized clocks, the deployment may fail due to clock drift if the time difference is greater than two seconds.

Procedure

Check for differences in the AGE of the cluster nodes. For example:

$ oc get nodes
NAME                         STATUS   ROLES    AGE    VERSION
master-0.cloud.example.com   Ready    master   145m   v1.26.0
master-1.cloud.example.com   Ready    master   135m   v1.26.0
master-2.cloud.example.com   Ready    master   145m   v1.26.0
worker-2.cloud.example.com   Ready    worker   100m   v1.26.0

Check for inconsistent timing delays due to clock drift. For example:

$ oc get bmh -n openshift-machine-api
master-1   error registering master-1   ipmi://<out_of_band_ip>

$ sudo timedatectl
               Local time: Tue 2020-03-10 18:20:02 UTC
           Universal time: Tue 2020-03-10 18:20:02 UTC
                 RTC time: Tue 2020-03-10 18:36:53
                Time zone: UTC (UTC, +0000)
System clock synchronized: no
              NTP service: active
          RTC in local TZ: no

Addressing clock drift in existing clusters

Create a Butane config file including the contents of the chrony.conf file to be delivered to the nodes. In the following example, create 99-master-chrony.bu to add the file to the control plane nodes. You can modify the file for worker nodes or repeat this procedure for the worker role.

Note: See "Creating machine configs with Butane" for information about Butane.

variant: openshift
version: 4.13.0
metadata:
  name: 99-master-chrony
  labels:
    machineconfiguration.openshift.io/role: master
storage:
  files:
  - path: /etc/chrony.conf
    mode: 0644
    overwrite: true
    contents:
      inline: |
        server <NTP_server> iburst 1
        stratumweight 0
        driftfile /var/lib/chrony/drift
        rtcsync
        makestep 10 3
        bindcmdaddress 127.0.0.1
        bindcmdaddress ::1
        keyfile /etc/chrony.keys
        commandkey 1
        generatecommandkey
        noclientlog
        logchange 0.5
        logdir /var/log/chrony

1 Replace <NTP_server> with the IP address of the NTP server.

Use Butane to generate a MachineConfig object file, 99-master-chrony.yaml, containing the configuration to be delivered to the nodes:

$ butane 99-master-chrony.bu -o 99-master-chrony.yaml

Apply the MachineConfig object file:

$ oc apply -f 99-master-chrony.yaml

Ensure the System clock synchronized value is yes:

$ sudo timedatectl
               Local time: Tue 2020-03-10 19:10:02 UTC
           Universal time: Tue 2020-03-10 19:10:02 UTC
                 RTC time: Tue 2020-03-10 19:36:53
                Time zone: UTC (UTC, +0000)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

To set up clock synchronization prior to deployment, generate the manifest files and add this file to the openshift directory. For example:

$ cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml

Then, continue to create the cluster.

7.11. Reviewing the installation

After installation, ensure the installer deployed the nodes and pods successfully.

Procedure

When the OpenShift Container Platform cluster nodes are installed appropriately, the following Ready state is seen within the STATUS column:

$ oc get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master-0.example.com   Ready    master,worker   4h    v1.26.0
master-1.example.com   Ready    master,worker   4h    v1.26.0
master-2.example.com   Ready    master,worker   4h    v1.26.0

Confirm the installer deployed all pods successfully. The following command removes pods that are still running or have completed from the output, so any pods that remain listed require attention:

$ oc get pods --all-namespaces | grep -iv running | grep -iv complete

A script wrapping both checks is sketched below.
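To make the final review repeatable, the two verification commands can be wrapped in a small script that exits non-zero when something needs attention. This is a sketch built only from the commands above; note that the simple awk test flags any STATUS other than exactly Ready, including legitimate transient states such as Ready,SchedulingDisabled.

#!/bin/bash
# Minimal sketch: post-install verification. Fails if any node is not
# Ready or if any pod is neither Running nor Completed.
rc=0

not_ready=$(oc get nodes --no-headers | awk '$2 != "Ready"')
if [ -n "${not_ready}" ]; then
  echo "Nodes not Ready:"; echo "${not_ready}"; rc=1
fi

# Filter Running/Completed pods out of the output, as documented above.
problem_pods=$(oc get pods --all-namespaces --no-headers \
  | grep -iv running | grep -iv complete)
if [ -n "${problem_pods}" ]; then
  echo "Pods with issues:"; echo "${problem_pods}"; rc=1
fi

exit ${rc}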
[ "curl -s -o /dev/null -I -w \"%{http_code}\\n\" http://webserver.example.com:8080/rhcos-44.81.202004250133-0-qemu.<architecture>.qcow2.gz?sha256=7d884b46ee54fe87bbc3893bf2aa99af3b2d31f2e19ab5529c60636fbd0f1ce7", "sudo virsh list", "Id Name State -------------------------------------------- 12 openshift-xf6fq-bootstrap running", "systemctl status libvirtd", "● libvirtd.service - Virtualization daemon Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled) Active: active (running) since Tue 2020-03-03 21:21:07 UTC; 3 weeks 5 days ago Docs: man:libvirtd(8) https://libvirt.org Main PID: 9850 (libvirtd) Tasks: 20 (limit: 32768) Memory: 74.8M CGroup: /system.slice/libvirtd.service ├─ 9850 /usr/sbin/libvirtd", "sudo virsh console example.com", "Connected to domain example.com Escape character is ^] Red Hat Enterprise Linux CoreOS 43.81.202001142154.0 (Ootpa) 4.3 SSH host key: SHA256:BRWJktXZgQQRY5zjuAV0IKZ4WM7i4TiUyMVanqu9Pqg (ED25519) SSH host key: SHA256:7+iKGA7VtG5szmk2jB5gl/5EZ+SNcJ3a2g23o0lnIio (ECDSA) SSH host key: SHA256:DH5VWhvhvagOTaLsYiVNse9ca+ZSW/30OOMed8rIGOc (RSA) ens3: fd35:919d:4042:2:c7ed:9a9f:a9ec:7 ens4: 172.22.0.2 fe80::1d05:e52e:be5d:263f localhost login:", "ssh [email protected]", "ssh [email protected]", "[core@localhost ~]USD sudo podman logs -f <container_name>", "ipmitool -I lanplus -U root -P <password> -H <out_of_band_ip> power off", "bootstrapOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-qemu.<architecture>.qcow2.gz?sha256=9d999f55ff1d44f7ed7c106508e5deecd04dc3c06095d34d36bf1cd127837e0c clusterOSImage: http://<ip:port>/rhcos-43.81.202001142154.0-openstack.<architecture>.qcow2.gz?sha256=a1bda656fa0892f7b936fdc6b6a6086bddaed5dafacedcd7a1e811abb78fe3b0", "ssh [email protected]", "[core@localhost ~]USD sudo podman logs -f coreos-downloader", "[core@localhost ~]USD journalctl -xe", "[core@localhost ~]USD journalctl -b -f -u bootkube.service", "[core@localhost ~]USD sudo podman ps", "[core@localhost ~]USD sudo podman logs ironic", "bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC", "bootMACAddress: 24:6E:96:1B:96:90 # MAC of bootable provisioning NIC", "ProvisioningError 51s metal3-baremetal-controller Image provisioning failed: Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://<bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.8.GeneralError: A general error has occurred. See ExtendedInfo for more information Extended information: [ { \"Message\": \"Unable to mount remote share https://<ironic_address>/redfish/boot-<uuid>.iso.\", \"MessageArgs\": [ \"https://<ironic_address>/redfish/boot-<uuid>.iso\" ], \"[email protected]\": 1, \"MessageId\": \"IDRAC.2.5.RAC0720\", \"RelatedProperties\": [ \"#/Image\" ], \"[email protected]\": 1, \"Resolution\": \"Retry the operation.\", \"Severity\": \"Informational\" } ].", "hostname", "hostnamectl set-hostname <hostname>", "dig api.<cluster_name>.example.com", "; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el8 <<>> api.<cluster_name>.example.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37551 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 866929d2f8e8563582af23f05ec44203d313e50948d43f60 (good) ;; QUESTION SECTION: ;api.<cluster_name>.example.com. IN A ;; ANSWER SECTION: api.<cluster_name>.example.com. 
10800 IN A 10.19.13.86 ;; AUTHORITY SECTION: <cluster_name>.example.com. 10800 IN NS <cluster_name>.example.com. ;; ADDITIONAL SECTION: <cluster_name>.example.com. 10800 IN A 10.19.14.247 ;; Query time: 0 msec ;; SERVER: 10.19.14.247#53(10.19.14.247) ;; WHEN: Tue May 19 20:30:59 UTC 2020 ;; MSG SIZE rcvd: 140", "sudo nano /etc/dnsmasq.conf", "address=/api-int.<cluster_name>.<base_domain>/<IP_address> address=/api-int.mycluster.example.com/192.168.1.10 address=/api-int.mycluster.example.com/2001:0db8:85a3:0000:0000:8a2e:0370:7334", "sudo nano /etc/dnsmasq.conf", "ptr-record=<IP_address>.in-addr.arpa,api-int.<cluster_name>.<base_domain> ptr-record=10.1.168.192.in-addr.arpa,api-int.mycluster.example.com", "sudo systemctl restart dnsmasq", "ipmitool -I lanplus -U <user> -P <password> -H <management_server_ip> power off", "for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done", "rm -rf ~/clusterconfigs/auth ~/clusterconfigs/terraform* ~/clusterconfigs/tls ~/clusterconfigs/metadata.json", "/usr/local/bin/oc adm release mirror -a pull-secret-update.json --from=USDUPSTREAM_REPO --to-release-image=USDLOCAL_REG/USDLOCAL_REPO:USD{VERSION} --to=USDLOCAL_REG/USDLOCAL_REPO", "UPSTREAM_REPO=USD{RELEASE_IMAGE} LOCAL_REG=<registry_FQDN>:<registry_port> LOCAL_REPO='ocp4/openshift4'", "curl -k -u <user>:<password> https://registry.example.com:<registry_port>/v2/_catalog {\"repositories\":[\"<Repo_Name>\"]}", "`runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network`", "oc get all -n openshift-network-operator", "NAME READY STATUS RESTARTS AGE pod/network-operator-69dfd7b577-bg89v 0/1 ContainerCreating 0 149m", "kubectl get network.config.openshift.io cluster -oyaml", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNetwork: - 172.30.0.0/16 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networkType: OVNKubernetes", "openshift-install create manifests", "kubectl -n openshift-network-operator get pods", "kubectl -n openshift-network-operator logs -l \"name=network-operator\"", "No disk found with matching rootDeviceHints", "udevadm info /dev/sda", "This is a dnsmasq dhcp reservation, 'id:00:03:00:01' is the client id and '18:db:f2:8c:d5:9f' is the MAC Address for the NIC id:00:03:00:01:18:db:f2:8c:d5:9f,openshift-master-1,[2620:52:0:1302::6]", "Failed Units: 2 NetworkManager-wait-online.service nodeip-configuration.service", "[core@master-X ~]USD hostname", "[core@master-X ~]USD sudo nmcli con up \"<bare_metal_nic>\"", "[core@master-X ~]USD hostname", "[core@master-X ~]USD sudo systemctl restart NetworkManager", "[core@master-X ~]USD sudo systemctl restart nodeip-configuration.service", "[core@master-X ~]USD sudo systemctl daemon-reload", "[core@master-X ~]USD sudo systemctl restart kubelet.service", "[core@master-X ~]USD sudo journalctl -fu kubelet.service", "oc get csr", "oc get csr <pending_csr> -o jsonpath='{.spec.request}' | base64 --decode | openssl req -noout -text", "oc delete csr <wrong_csr>", "oc get route oauth-openshift", "oc get svc oauth-openshift", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE oauth-openshift ClusterIP 172.30.19.162 <none> 443/TCP 59m", "[core@master0 ~]USD curl -k https://172.30.19.162", "{ \"kind\": \"Status\", 
\"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403", "oc logs deployment/authentication-operator -n openshift-authentication-operator", "Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"openshift-authentication-operator\", Name:\"authentication-operator\", UID:\"225c5bd5-b368-439b-9155-5fd3c0459d98\", APIVersion:\"apps/v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from \"IngressStateEndpointsDegraded: All 2 endpoints for oauth-server are reporting\"", "Failed Units: 1 machine-config-daemon-firstboot.service", "[core@worker-X ~]USD sudo systemctl restart machine-config-daemon-firstboot.service", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0.cloud.example.com Ready master 145m v1.26.0 master-1.cloud.example.com Ready master 135m v1.26.0 master-2.cloud.example.com Ready master 145m v1.26.0 worker-2.cloud.example.com Ready worker 100m v1.26.0", "oc get bmh -n openshift-machine-api", "master-1 error registering master-1 ipmi://<out_of_band_ip>", "sudo timedatectl", "Local time: Tue 2020-03-10 18:20:02 UTC Universal time: Tue 2020-03-10 18:20:02 UTC RTC time: Tue 2020-03-10 18:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: no NTP service: active RTC in local TZ: no", "variant: openshift version: 4.13.0 metadata: name: 99-master-chrony labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | server <NTP_server> iburst 1 stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony", "butane 99-master-chrony.bu -o 99-master-chrony.yaml", "oc apply -f 99-master-chrony.yaml", "sudo timedatectl", "Local time: Tue 2020-03-10 19:10:02 UTC Universal time: Tue 2020-03-10 19:10:02 UTC RTC time: Tue 2020-03-10 19:36:53 Time zone: UTC (UTC, +0000) System clock synchronized: yes NTP service: active RTC in local TZ: no", "cp chrony-masters.yaml ~/clusterconfigs/openshift/99_masters-chrony-configuration.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0.example.com Ready master,worker 4h v1.26.0 master-1.example.com Ready master,worker 4h v1.26.0 master-2.example.com Ready master,worker 4h v1.26.0", "oc get pods --all-namespaces | grep -iv running | grep -iv complete" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-troubleshooting
Chapter 2. Architectures
Chapter 2. Architectures

Red Hat Enterprise Linux 8.9 is distributed with the kernel version 4.18.0-513.5.1, which provides support for the following architectures:

- AMD and Intel 64-bit architectures
- The 64-bit ARM architecture
- IBM Power Systems, Little Endian
- 64-bit IBM Z

Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures. For a list of available subscriptions, see Subscription Utilization on the Customer Portal.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/architectures
5.8. Changing the CD for a Virtual Machine
5.8. Changing the CD for a Virtual Machine

You can change the CD accessible to a virtual machine while that virtual machine is running, using ISO images that have been uploaded to the data domain of the virtual machine's cluster. See Uploading Images to a Data Storage Domain in the Administration Guide for details.

Changing the CD for a Virtual Machine

Click Compute Virtual Machines and select a running virtual machine.

Click More Actions, then click Change CD.

Select an option from the drop-down list:

- Select an ISO file from the list to eject the CD currently accessible to the virtual machine and mount that ISO file as a CD.
- Select [Eject] from the list to eject the CD currently accessible to the virtual machine.

Click OK.
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/changing_the_cd_for_a_virtual_machine
Chapter 1. Red Hat Quay release notes
Chapter 1. Red Hat Quay release notes

The following sections detail y and z stream release information.

1.1. RHBA-2025:1079 - Red Hat Quay 3.13.4 release

Issued 2025-02-20

Red Hat Quay release 3.13.4 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2025:1079 advisory.

1.2. RHBA-2025:0301 - Red Hat Quay 3.13.3 release

Issued 2025-01-20

Red Hat Quay release 3.13.3 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2025:0301 advisory.

1.2.1. Red Hat Quay 3.13.3 bug fixes

PROJQUAY-8336. Previously, when using Red Hat Quay with managed Quay and Clair PostgreSQL databases, Red Hat Advanced Cluster Security would scan all running Quay pods and report High Image Vulnerability in Quay PostgreSQL database and Clair PostgreSQL database. This issue has been resolved.

1.3. RHBA-2024:10967 - Red Hat Quay 3.13.2 release

Issued 2024-12-17

Red Hat Quay release 3.13.2 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:10967 advisory.

1.3.1. Red Hat Quay 3.13.2 new features

With this release, a pull-through cache organization can now be created when using the Red Hat Quay v2 UI. For more information, see Using Red Hat Quay to proxy a remote registry.

1.3.2. Red Hat Quay 3.13.2 known issues

When using the pull-through proxy feature in Red Hat Quay with quota management enabled, and the organization quota fills up, it is expected that Red Hat Quay removes the least recently used image to free up space for new cached entries. However, images pulled by digest are not evicted automatically when the quota is exceeded, which causes subsequent pull attempts to return a Quota has been exceeded on namespace error. As a temporary workaround, you can run a bash shell inside of the Red Hat Quay database pod to make digest-pulled images visible for eviction with the following setting: update tag set hidden = 0;. For more information, see PROJQUAY-8071.

1.3.3. Red Hat Quay 3.13.2 bug fixes

PROJQUAY-8273, PROJQUAY-6474. When deploying Red Hat Quay with a custom HorizontalPodAutoscaler component and then setting the component to managed: false in the QuayRegistry custom resource definition (CRD), the Red Hat Quay Operator continuously terminates and resets the minReplicas value to 2 for the mirror and clair components. To work around this issue, see Using unmanaged Horizontal Pod Autoscalers.

PROJQUAY-8208. Previously, Red Hat Quay would return a 501 error on repository or organization creation when the authorization type was set to OIDC and restricted users were set. This issue has been resolved.

PROJQUAY-8269. Previously, on the Red Hat Quay UI, the OAuth scopes page suggested that scopes could be applied to robot accounts. This was not the case. Wording on the OAuth scopes page of the UI has been fixed.

1.4. RHBA-2024:9478 - Red Hat Quay 3.13.1 release

Issued 2024-11-18

Red Hat Quay release 3.13.1 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:9478 advisory.

1.5. Information about upgrading to 3.13.1

Previously, when attempting to upgrade to Red Hat Quay 3.13, if FIPS mode was enabled for your OpenShift Container Platform cluster with Clair enabled, Clair would not function in your cluster. This issue was resolved in version 3.13.1. Upgrading to Red Hat Quay 3.13 automatically upgrades users to version 3.13.1 so that this issue is avoided.
Additionally, if you are upgrading from 3.13 to 3.13.1 and FIPS mode was enabled, upgrading to 3.13.1 resolves the issue. (PROJQUAY-8185)

1.5.1. Red Hat Quay 3.13.1 enhancements

With the release of Red Hat Quay 3.13.1, Hitachi Content Platform (HCP) is now supported for use as a storage backend. This allows organizations to leverage HCP for scalable, secure, and reliable object storage within their Red Hat Quay registry deployments. For more information, see HCP Object Storage.

1.5.2. Red Hat Quay 3.13.1 known issues

When using Hitachi Content Platform for your object storage, attempting to push an image with a large layer to a Red Hat Quay registry results in the following error:

An error occurred (NoSuchUpload) when calling the CompleteMultipartUpload operation: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.

This is a known issue and will be fixed in a future version of Red Hat Quay.

1.5.3. Red Hat Quay 3.13.1 bug fixes

PROJQUAY-8185. Previously, when attempting to upgrade Red Hat Quay on OpenShift Container Platform to 3.13 with FIPS mode enabled, the upgrade would fail for deployments using Clair. This issue has been resolved. Upgrading to 3.13.1 does not fail for Red Hat Quay on OpenShift Container Platform using Clair with FIPS mode enabled.

PROJQUAY-8024. Previously, using Hitachi HCP v9.7 as your storage provider would return errors when attempting to pull images. This issue has been resolved.

PROJQUAY-5086. Previously, Red Hat Quay on OpenShift Container Platform would produce information about horizontal pod autoscalers (HPAs) for some components (for example, Clair, Redis, PostgreSQL, and ObjectStorage) when they were unmanaged by the Operator. This issue has been resolved and information about HPAs is no longer reported for unmanaged components.

1.6. RHBA-2024:8408 - Red Hat Quay 3.13.0 release

Issued 2024-10-30

Red Hat Quay release 3.13 is now available with Clair 4.8. The bug fixes that are included in the update are listed in the RHBA-2024:8408 advisory.

For the most recent compatibility matrix, see Quay Enterprise 3.x Tested Integrations. For information about the release cadence of Red Hat Quay, see the Red Hat Quay Life Cycle Policy.

1.7. Red Hat Quay documentation changes

The following documentation changes have been made with the Red Hat Quay 3.13 release:

- The Red Hat Quay Builders feature that was originally documented in the Using Red Hat Quay guide has been moved into a new, dedicated book titled "Builders and image automation".
- The Red Hat Quay Builders feature that was originally documented in the Red Hat Quay Operator features has been moved into a new, dedicated book titled "Builders and image automation".
- A new book titled "Securing Red Hat Quay" has been created. This book covers SSL and TLS for Red Hat Quay, and adding additional certificate authorities (CAs) to your deployment. More content will be added to this book in the future.
- A new book titled "Managing access and permissions" has been created. This book covers topics related to access controls, repository visibility, and robot accounts by using the UI and the API. More content will be added to this book in the future.

1.8. Upgrading to Red Hat Quay 3.13

With Red Hat Quay 3.13, the volumeSize parameter has been implemented for use with the clairpostgres component of the QuayRegistry custom resource definition (CRD).
This replaces the volumeSize parameter that was previously used for the clair component of the same CRD.

If your Red Hat Quay 3.12 QuayRegistry custom resource definition (CRD) implemented a volume override for the clair component, you must ensure that the volumeSize field is included under the clairpostgres component of the QuayRegistry CRD.

Important: Failure to move volumeSize from the clair component to the clairpostgres component will result in a failed upgrade to version 3.13.

For example:

spec:
  components:
    - kind: clair
      managed: true
    - kind: clairpostgres
      managed: true
      overrides:
        volumeSize: <volume_size>

For more information, see Upgrade Red Hat Quay.

1.9. Red Hat Quay new features and enhancements

The following updates have been made to Red Hat Quay.

1.9.1. Red Hat Quay auto-pruning enhancements

With the release of Red Hat Quay 3.10, a new auto-pruning feature was released. With that feature, Red Hat Quay administrators could set up auto-pruning policies on namespaces for both users and organizations so that image tags were automatically deleted based on specified criteria. In Red Hat Quay 3.11, this feature was enhanced so that auto-pruning policies could be set up on specified repositories. With Red Hat Quay 3.12, default auto-pruning policies could be set up at the registry level on new and existing configurations, which saved Red Hat Quay administrators time, effort, and storage by enforcing registry-wide rules. With the release of Red Hat Quay 3.13, the following enhancements have been made to the auto-pruning feature.

1.9.1.1. Tag specification patterns in auto-pruning policies

Previously, the Red Hat Quay auto-pruning feature could not target or exclude specific image tags. With the release of Red Hat Quay 3.13, it is now possible to specify a regular expression, or regex, to match a subset of tags for both organization- and repository-level auto-pruning policies. This allows Red Hat Quay administrators to create more granular auto-pruning policies that target only certain image tags for removal. For more information, see Using regular expressions with auto-pruning.

1.9.1.2. Multiple auto-pruning policies

Previously, Red Hat Quay only supported a single auto-pruning policy per organization and repository. With the release of Red Hat Quay 3.13, multiple auto-pruning policies can now be applied to an organization or a repository. These auto-pruning policies can be based on different tag naming (regex) patterns to cater for the different life cycles of images in the same repository or organization. This feature provides more flexibility when automating the image life cycle in your repository.

Additional auto-pruning policies can be added on the Red Hat Quay v2 UI by clicking Add Policy on the Auto-Pruning Policies page. They can also be added by using the API. For more information about setting auto-prune policies, see Red Hat Quay auto-pruning overview.

1.9.2. Keyless authentication with robot accounts

In previous versions of Red Hat Quay, robot account tokens were valid for the lifetime of the token unless deleted or regenerated. Tokens that do not expire have security implications for users who do not want to store long-term passwords or manage the deletion and regeneration of authentication tokens. With Red Hat Quay 3.13, Red Hat Quay administrators are provided the ability to exchange Red Hat Quay robot account tokens for an external OIDC token. This allows robot accounts to leverage short-lived, or ephemeral, tokens that last one hour.
Ephemeral tokens are refreshed regularly and can be used to authenticate individual transactions. This feature greatly enhances the security of your Red Hat Quay registry by mitigating the possibility of robot token exposure by removing the tokens after one hour. For more information, see Keyless authentication with robot accounts.

1.10. Red Hat Quay on OpenShift Container Platform new features and enhancements

The following updates have been made to Red Hat Quay on OpenShift Container Platform.

1.10.1. Support for certificate-based authentication between Red Hat Quay and PostgreSQL

With this release, support for certificate-based authentication between Red Hat Quay and PostgreSQL has been added. This allows Red Hat Quay administrators to supply their own SSL/TLS certificates that can be used for client-side authentication with PostgreSQL or CloudSQL. This provides enhanced security and allows for easier automation for your Red Hat Quay registry. For more information, see Certificate-based authentication between Red Hat Quay and SQL.

1.10.2. Red Hat Quay v2 UI enhancements

The following enhancements have been made to the Red Hat Quay v2 UI.

1.10.2.1. Robot federation selection

A new configuration page, Set robot federation, has been added to the Red Hat Quay v2 UI. This can be found by navigating to your organization or repository's robot account, clicking the menu kebab, and then clicking Set robot federation. This page is used when configuring keyless authentication with robot accounts, and allows you to add multiple OIDC providers to a single robot account. For more information, see Keyless authentication with robot accounts.

1.11. New Red Hat Quay configuration fields

The following configuration fields have been added to Red Hat Quay 3.13.

1.11.1. Disabling pushes to the Red Hat Quay registry configuration field

In some cases, a read-only option for Red Hat Quay is not possible since it requires inserting a service key and other manual configuration changes. With the release of Red Hat Quay 3.13, a new configuration field has been added: DISABLE_PUSHES.

When DISABLE_PUSHES is set to true, users are unable to push images or image tags to the registry when using the CLI. Most other registry operations continue as normal when this feature is enabled by using the Red Hat Quay UI. For example, changing tags, editing a repository, robot account creation and deletion, user creation, and so on are all possible by using the UI.

When DISABLE_PUSHES is set to true, the Red Hat Quay garbage collector is disabled. As a result, when PERMANENTLY_DELETE_TAGS is enabled, using the Red Hat Quay UI to permanently delete a tag does not result in the immediate deletion of a tag. Instead, the tag stays in the repository until DISABLE_PUSHES is set to false, which re-enables the garbage collector. Red Hat Quay administrators should be aware of this caveat when using DISABLE_PUSHES and PERMANENTLY_DELETE_TAGS together.

This field might be useful in some situations, such as when Red Hat Quay administrators want to calculate their registry's quota and disable image pushing until after the calculation has completed. With this method, administrators can avoid putting the whole registry in read-only mode, which affects the database, so that most operations can still be done.

Field | Type | Description
DISABLE_PUSHES | Boolean | Disables pushes of new content to the registry while retaining all other functionality. Differs from read-only mode because the database is not set as read-only. Defaults to false.
Example DISABLE_PUSHES configuration field:

# ...
DISABLE_PUSHES: true
# ...

1.12. API endpoint enhancements

1.12.1. New autoPrunePolicy endpoints

tagPattern and tagPatternMatches API parameters have been added to the following API endpoints:

- createOrganizationAutoPrunePolicy
- updateOrganizationAutoPrunePolicy
- createRepositoryAutoPrunePolicy
- updateRepositoryAutoPrunePolicy
- createUserAutoPrunePolicy
- updateUserAutoPrunePolicy

These fields enhance the auto-pruning feature by allowing Red Hat Quay administrators more control over what images are pruned. The following table provides descriptions of these fields:

Name | Description | Schema
tagPattern (optional) | Tags only matching this pattern (regex) will be pruned. | string
tagPatternMatches (optional) | Determine whether pruned tags should or should not match the tagPattern. | boolean

For example API commands, see Red Hat Quay auto-pruning overview. A hedged curl sketch using these parameters appears after the feature tracker tables at the end of this chapter.

1.12.2. New federated robot token API endpoints

The following API endpoints have been added for the keyless authentication with robot accounts feature:

- GET oauth2/federation/robot/token. Use this API endpoint to return an expiring robot token using the robot identity federation mechanism.
- POST /api/v1/organization/{orgname}/robots/{robot_shortname}/federation. Use this API endpoint to create a federation configuration for the specified organization robot.

1.13. Red Hat Quay 3.13 notable technical changes

Clair now requires its PostgreSQL database to be version 15. For standalone Red Hat Quay deployments, administrators must manually migrate their database over from PostgreSQL version 13 to version 15. For more information about this procedure, see Upgrading the Clair PostgreSQL database. For Red Hat Quay on OpenShift Container Platform deployments, this update is automatically handled by the Operator so long as your Clair PostgreSQL database is currently using version 13.

1.14. Red Hat Quay 3.13 known issues and limitations

The following sections note known issues and limitations for Red Hat Quay 3.13.

1.14.1. Clair vulnerability report known issue

When pushing SUSE Enterprise Linux images with HIGH image vulnerabilities, Clair 4.8.0 does not report these vulnerabilities. This is a known issue and will be fixed in a future version of Red Hat Quay.

1.14.2. FIPS mode known issue

If FIPS mode is enabled for your OpenShift Container Platform cluster and you use Clair, you must not upgrade the Red Hat Quay Operator to version 3.13. If you upgrade, Clair will not function in your cluster. (PROJQUAY-8185)

1.14.3. Registry auto-pruning known issues

The following known issues apply to the auto-pruning feature.

1.14.3.1. Policy prioritization known issue

Currently, the auto-pruning feature prioritizes the following order when configured:

1. Method: creation_date + organization wide
2. Method: creation_date + repository wide
3. Method: number_of_tags + organization wide
4. Method: number_of_tags + repository wide

This means that the auto-pruner first prioritizes, for example, an organization-wide policy set to expire tags by their creation date before it prunes images by the number of tags that a repository has.

There is a known issue when configuring a registry-wide auto-pruning policy. If Red Hat Quay administrators configure a number_of_tags policy before a creation_date policy, it is possible to prune more than the intended set for the number_of_tags policy. This might lead to situations where a repository removes certain image tags unexpectedly. This is not an issue for organization or repository-wide auto-prune policies.
This known issue only exists at the registry level. It will be fixed in a future version of Red Hat Quay.

1.14.3.2. Unrecognizable auto-prune tag patterns

When creating an auto-prune policy, the pruner cannot recognize \b and \B patterns. This is a common behavior with regular expression patterns, wherein \b and \B match empty strings. Red Hat Quay administrators should avoid using regex patterns that use \B and \b to avoid this issue. (PROJQUAY-8089)

1.14.4. Red Hat Quay v2 UI known issues

The Red Hat Quay team is aware of the following known issues on the v2 UI:

- PROJQUAY-6910. The new UI can't group and stack the chart on usage logs
- PROJQUAY-6909. The new UI can't toggle the visibility of the chart on usage log
- PROJQUAY-6904. "Permanently delete" tag should not be restored on new UI
- PROJQUAY-6899. The normal user can not delete organization in new UI when enable FEATURE_SUPERUSERS_FULL_ACCESS
- PROJQUAY-6892. The new UI should not invoke not required stripe and status page
- PROJQUAY-6884. The new UI should show the tip of slack Webhook URL when creating slack notification
- PROJQUAY-6882. The new UI global readonly super user can't see all organizations and image repos
- PROJQUAY-6881. The new UI can't show all operation types in the logs chart
- PROJQUAY-6861. The new UI "Last Modified" of organization always show N/A after target organization's setting is updated
- PROJQUAY-6860. The new UI update the time machine configuration of organization show NULL in usage logs
- PROJQUAY-6859. The new UI remove image repo permission show "undefined" for organization name in audit logs
- PROJQUAY-6852. "Tag manifest with the branch or tag name" option in build trigger setup wizard should be checked by default.
- PROJQUAY-6832. The new UI should validate the OIDC group name when enable OIDC Directory Sync
- PROJQUAY-6830. The new UI should show the sync icon when the team is configured sync team members from OIDC Group
- PROJQUAY-6829. The new UI team member added to team sync from OIDC group should be audited in Organization logs page
- PROJQUAY-6825. Build cancel operation log can not be displayed correctly in new UI
- PROJQUAY-6812. The new UI the "performer by" is NULL of build image in logs page
- PROJQUAY-6810. The new UI should highlight the tag name with tag icon in logs page
- PROJQUAY-6808. The new UI can't click the robot account to show credentials in logs page
- PROJQUAY-6807. The new UI can't see the operations types in log page when quay is in dark mode
- PROJQUAY-6770. The new UI build image by uploading Docker file should support .tar.gz or .zip
- PROJQUAY-6769. The new UI should not display message "Trigger setup has already been completed" after build trigger setup completed
- PROJQUAY-6768. The new UI can't navigate back to current image repo from image build
- PROJQUAY-6767. The new UI can't download build logs
- PROJQUAY-6758. The new UI should display correct operation number when hover over different operation type
- PROJQUAY-6757. The new UI usage log should display the tag expiration time as date format

1.15. Red Hat Quay bug fixes

The following issues were fixed with Red Hat Quay 3.13:

PROJQUAY-5681. Previously, when configuring an image repository with Events and Notifications to receive a Slack notification for Push to Repository and Package Vulnerability Found, no notification was returned when a new critical image vulnerability was found. This issue has been resolved.

PROJQUAY-7244. Previously, it was not possible to filter for repositories under specific organizations.
This issue has been resolved, and you can now filter for repositories under specific organizations.

PROJQUAY-7388. Previously, when Red Hat Quay was configured with OIDC authentication using Microsoft Azure Entra ID and team sync was enabled, removing the team sync resulted in the usage logs chart displaying Undefined. This issue has been resolved.

PROJQUAY-7430. Some public container image registries, for example, Google Cloud Registry, generate longer passwords for the login. When this happened, Red Hat Quay could not mirror images from those registries because the password length exceeded the maximum allowed in the Red Hat Quay database. The actual length limit imposed by the encryption mechanism is lower than 9000. This implies that while the database can hold up to 9000 characters, the effective limit during encryption is actually 6000, and can be calculated as follows: Max Password Length = field_max_length - _RESERVED_FIELD_SPACE. A password length of 6000 ensures compatibility with AWS ECR and most registries.

PROJQUAY-7599. Previously, attempting to delete a manifest using a tag name and the Red Hat Quay v2 API resulted in a 405 error code. This was because there was no delete_manifest_by_tagname operation in the API. This issue has been resolved.

PROJQUAY-7606. Users can now create a new team using dashes ( - ) via the v2 UI. Previously, this could only be done using the API.

PROJQUAY-7686. Previously, the vulnerability page showed vertical scroll bars when URLs provided in the advisories were too long, which made it difficult to read information from the page. This issue has been resolved.

PROJQUAY-7982. There was a bug in the console service when using Quay.io for the first time. When attempting to create a user correlated with the console's user, clicking Confirm username refreshed the page and opened the same modal. This issue has been resolved.

1.16. Red Hat Quay feature tracker

New features have been added to Red Hat Quay, some of which are currently in Technology Preview. Technology Preview features are experimental features and are not intended for production use.

Some features available in releases have been deprecated or removed. Deprecated functionality is still included in Red Hat Quay, but is planned for removal in a future release and is not recommended for new deployments. For the most recent list of deprecated and removed functionality in Red Hat Quay, refer to Table 1.1. Additional details for more fine-grained functionality that has been deprecated and removed are listed after the table.

Table 1.1.
New features tracker

Feature | Quay 3.13 | Quay 3.12 | Quay 3.11
Keyless authentication with robot accounts | General Availability | - | -
Certificate-based authentication between Red Hat Quay and SQL | General Availability | - | -
Splunk HTTP Event Collector (HEC) support | General Availability | General Availability | -
Open Container Initiative 1.1 support | General Availability | General Availability | -
Reassigning an OAuth access token | General Availability | General Availability | -
Creating an image expiration notification | General Availability | General Availability | -
Team synchronization for Red Hat Quay OIDC deployments | General Availability | General Availability | General Availability
Configuring resources for managed components on OpenShift Container Platform | General Availability | General Availability | General Availability
Configuring AWS STS for Red Hat Quay, Configuring AWS STS for Red Hat Quay on OpenShift Container Platform | General Availability | General Availability | General Availability
Red Hat Quay repository auto-pruning | General Availability | General Availability | General Availability
FEATURE_UI_V2 | Technology Preview | Technology Preview | Technology Preview

1.16.1. IBM Power, IBM Z, and IBM(R) LinuxONE support matrix

Table 1.2. List of supported and unsupported features

Feature | IBM Power | IBM Z and IBM(R) LinuxONE
Allow team synchronization via OIDC on Azure | Not Supported | Not Supported
Backing up and restoring on a standalone deployment | Supported | Supported
Clair Disconnected | Supported | Supported
Geo-Replication (Standalone) | Supported | Supported
Geo-Replication (Operator) | Supported | Not Supported
IPv6 | Not Supported | Not Supported
Migrating a standalone to operator deployment | Supported | Supported
Mirror registry | Supported | Supported
PostgreSQL connection pooling via pgBouncer | Supported | Supported
Quay config editor - mirror, OIDC | Supported | Supported
Quay config editor - MAG, Kinesis, Keystone, GitHub Enterprise | Not Supported | Not Supported
Quay config editor - Red Hat Quay V2 User Interface | Supported | Supported
Quay Disconnected | Supported | Supported
Repo Mirroring | Supported | Supported
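As a closing illustration of the tagPattern and tagPatternMatches parameters described in the API endpoint enhancements above, the following curl sketch creates an organization-level auto-prune policy. The /api/v1/organization/<org>/autoprunepolicy/ path and the method/value payload fields reflect the general shape of the Quay auto-prune API but should be verified against the API reference for your release; the registry URL, token, and organization name are placeholders.

#!/bin/bash
# Hedged sketch: create an organization-level auto-prune policy that keeps
# at most 10 tags matching a release tag pattern. The endpoint path and the
# method/value fields are assumptions to check against the API reference;
# tagPattern/tagPatternMatches are the parameters documented above.
QUAY=https://quay.example.com   # placeholder registry URL
TOKEN=<oauth_access_token>      # placeholder OAuth token
ORG=myorg                       # placeholder organization

curl -sk -X POST \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "method": "number_of_tags",
        "value": 10,
        "tagPattern": "^release-.*",
        "tagPatternMatches": true
      }' \
  "${QUAY}/api/v1/organization/${ORG}/autoprunepolicy/"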
[ "An error occurred (NoSuchUpload) when calling the CompleteMultipartUpload operation: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.", "spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size>", "DISABLE_PUSHES: true" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_release_notes/release-notes-313
Chapter 44. Kernel
Chapter 44. Kernel

eBPF system call for tracing

Red Hat Enterprise Linux 7.6 introduces the Extended Berkeley Packet Filter tool (eBPF) as a Technology Preview. This tool is enabled only for the tracing subsystem. For details, see the Red Hat Knowledgebase article at https://access.redhat.com/articles/3550581. (BZ# 1559615, BZ#1559756, BZ#1311586)

Heterogeneous memory management included as a Technology Preview

Red Hat Enterprise Linux 7.3 introduced the heterogeneous memory management (HMM) feature as a Technology Preview. This feature has been added to the kernel as a helper layer for devices that want to mirror a process address space into their own memory management unit (MMU). Thus a non-CPU device processor is able to read system memory using the unified system address space. To enable this feature, add experimental_hmm=enable to the kernel command line. (BZ#1230959)

criu rebased to version 3.9

Red Hat Enterprise Linux 7.2 introduced the criu tool as a Technology Preview. This tool implements Checkpoint/Restore in User-space (CRIU), which can be used to freeze a running application and store it as a collection of files. Later, the application can be restored from its frozen state. Note that the criu tool depends on Protocol Buffers, a language-neutral, platform-neutral extensible mechanism for serializing structured data. The protobuf and protobuf-c packages, which provide this dependency, were also introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.6, the criu packages have been upgraded to upstream version 3.9, which provides a number of bug fixes and optimizations for the runC container runtime. In addition, support for the 64-bit ARM architectures and the little-endian variant of IBM Power Systems CPU architectures has been fixed. (BZ# 1400230, BZ#1464596)

kexec as a Technology Preview

The kexec system call has been provided as a Technology Preview. This system call enables loading and booting into another kernel from the currently running kernel, thus performing the function of the boot loader from within the kernel. Hardware initialization, which is normally done during a standard system boot, is not performed during a kexec boot, which significantly reduces the time required for a reboot. (BZ#1460849)

kexec fast reboot as a Technology Preview

The kexec fast reboot feature, which was introduced in Red Hat Enterprise Linux 7.5, continues to be available as a Technology Preview. kexec fast reboot makes the reboot significantly faster. To use this feature, you must load the kexec kernel manually, and then reboot the operating system. It is not possible to make kexec fast reboot the default reboot action. A special case is using kexec fast reboot with Anaconda: even then, kexec fast reboot is not the default, but when used with Anaconda, the operating system can automatically use kexec fast reboot after the installation is complete if the user boots the kernel with the appropriate option. To schedule a kexec reboot, use the inst.kexec command on the kernel command line, or include a reboot --kexec line in the Kickstart file. A usage sketch appears at the end of this chapter. (BZ#1464377)

perf cqm has been replaced by resctrl

The Intel Cache Allocation Technology (CAT) was introduced in Red Hat Enterprise Linux 7.4 as a Technology Preview. However, the perf cqm tool did not work correctly due to an incompatibility between perf infrastructure and Cache Quality of Service Monitoring (CQM) hardware support. Consequently, multiple problems occurred when using perf cqm.
These problems included most notably:

- perf cqm did not support the group of tasks which is allocated using resctrl
- perf cqm gave random and inaccurate data due to several problems with recycling
- perf cqm did not provide enough support when running different kinds of events together (the different events are, for example, tasks, system-wide, and cgroup events)
- perf cqm provided only partial support for cgroup events
- The partial support for cgroup events did not work in cases with a hierarchy of cgroup events, or when monitoring a task in a cgroup and the cgroup together
- Monitoring tasks for the lifetime caused perf overhead
- perf cqm reported the aggregate cache occupancy or memory bandwidth over all sockets, while in most cloud and VMM-based use cases the individual per-socket usage is needed

In Red Hat Enterprise Linux 7.5, perf cqm was replaced by the approach based on the resctrl file system, which addressed all of the aforementioned problems. (BZ# 1457533, BZ#1288964)

TC HW offloading available as a Technology Preview

Starting with Red Hat Enterprise Linux 7.6, Traffic Control (TC) Hardware offloading has been provided as a Technology Preview. Hardware offloading enables selected network traffic processing functions, such as shaping, scheduling, policing, and dropping, to be executed directly in the hardware instead of waiting for software processing, which improves performance. (BZ#1503123)

AMD xgbe network driver available as a Technology Preview

Starting with Red Hat Enterprise Linux 7.6, the AMD xgbe network driver has been provided as a Technology Preview. (BZ#1589397)
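As a concrete illustration of the kexec fast reboot flow described above, the following sketch loads the currently running kernel manually and then reboots through it. The file paths and the systemctl kexec step are standard kexec-tools usage rather than text from this document; verify the kernel and initramfs paths on your system before running.

#!/bin/bash
# Hedged sketch: manually stage a kernel with kexec, then reboot into it,
# skipping firmware/BIOS initialization. Standard kexec-tools usage;
# adjust paths to your system.
KVER=$(uname -r)

# Stage the kernel and initramfs, reusing the current kernel command line.
sudo kexec -l "/boot/vmlinuz-${KVER}" \
  --initrd="/boot/initramfs-${KVER}.img" \
  --reuse-cmdline

# Reboot through the staged kernel instead of going through the firmware.
sudo systemctl kexec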
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_kernel
Chapter 19. PersistentClaimStorageOverride schema reference
Chapter 19. PersistentClaimStorageOverride schema reference Used in: PersistentClaimStorage
Property | Property type | Description
class | string | The storage class to use for dynamic volume allocation for this broker.
broker | integer | ID of the Kafka broker (broker identifier).
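The following is a minimal, hedged sketch of how this schema is typically used inside a persistent-claim storage definition in a Kafka custom resource; the storage class names and size are illustrative assumptions, not values from this reference:
# Hypothetical excerpt from a Kafka resource: per-broker storage class overrides.
storage:
  type: persistent-claim
  size: 100Gi
  class: standard
  overrides:
    - broker: 0
      class: fast-ssd
    - broker: 1
      class: fast-ssd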
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-PersistentClaimStorageOverride-reference
3.2. Managing Users via the User Manager Application
3.2. Managing Users via the User Manager Application The User Manager application allows you to view, modify, add, and delete local users and groups in the graphical user interface. To start the User Manager application: From the toolbar, select System → Administration → Users and Groups . Or, type system-config-users at the shell prompt. Note Unless you have superuser privileges, the application will prompt you to authenticate as root . 3.2.1. Viewing Users In order to display the main window of the User Manager to view users, from the toolbar of User Manager select Edit → Preferences . If you want to view all the users, that is, including system users, clear the Hide system users and groups check box. The Users tab provides a list of local users along with additional information about their user ID, primary group, home directory, login shell, and full name. Figure 3.1. Viewing Users To find a specific user, type the first few letters of the name in the Search filter field and either press Enter , or click the Apply filter button. You can also sort the items according to any of the available columns by clicking the column header.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-users-configui
Chapter 25. User Authentication with Kerberos
Chapter 25. User Authentication with Kerberos User authentication using Active Directory (AD), also referred to as authentication through Kerberos, is supported through automation controller. 25.1. Set up the Kerberos packages First set up the Kerberos packages in automation controller so that you can successfully generate a Kerberos ticket. Use the following commands to install the packages: yum install krb5-workstation yum install krb5-devel yum install krb5-libs When installed, edit the /etc/krb5.conf file, as follows, to provide the address of the AD, the domain, and additional information: [logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = WEBSITE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true [realms] WEBSITE.COM = { kdc = WIN-SA2TXZOTVMV.website.com admin_server = WIN-SA2TXZOTVMV.website.com } [domain_realm] .website.com = WEBSITE.COM website.com = WEBSITE.COM When the configuration file has been updated, use the following commands to authenticate and get a valid token: [root@ip-172-31-26-180 ~]# kinit username Password for [email protected]: [root@ip-172-31-26-180 ~]# Check if you have a valid ticket. [root@ip-172-31-26-180 ~]# klist Ticket cache: FILE:/tmp/krb5cc_0 Default principal: [email protected] Valid starting Expires Service principal 01/25/23 11:42:56 01/25/23 21:42:53 krbtgt/[email protected] renew until 02/01/23 11:42:56 [root@ip-172-31-26-180 ~]# When you have a valid ticket, you can check to ensure that everything is working as expected from the command line. To test this, your inventory should resemble the following: [windows] win01.WEBSITE.COM [windows:vars] ansible_user = [email protected] ansible_connection = winrm ansible_port = 5986 You must also: Ensure that the hostname is the proper client hostname matching the entry in AD and is not the IP address. In the username declaration, ensure that the domain name (the text after @ ) is properly entered with regard to upper- and lower-case letters, as Kerberos is case sensitive. For automation controller, you must also ensure that the inventory looks the same. Note If you encounter a Server not found in Kerberos database error message, and your inventory is configured using FQDNs ( not IP addresses ), ensure that the service principal name is not missing or mis-configured. Playbooks should run as expected. You can test this by running the playbook as the awx user. When you have verified that playbooks work properly, you can integrate with automation controller. Generate the Kerberos ticket as the awx user. Automation controller automatically picks up the generated ticket for authentication. Note The python kerberos package must be installed. Ansible is designed to check if the kerberos package is installed and, if so, it uses kerberos authentication. 25.2. Active Directory and Kerberos Credentials Active Directory only: If you are only planning to run playbooks against Windows machines with AD usernames and passwords as machine credentials, you can use the "user@<domain>" format for the username. With Kerberos: If Kerberos is installed, you can create a machine credential with the username and password, using the "user@<domain>" format for the username. 25.3. 
Working with Kerberos Tickets Ansible defaults to automatically managing Kerberos tickets when both the username and password are specified in the machine credential for a host that is configured for Kerberos. A new ticket is created in a temporary credential cache for each host, before each task executes (to minimize the chance of ticket expiration). The temporary credential caches are deleted after each task, and do not interfere with the default credential cache. To disable automatic ticket management, that is, to use an existing SSO ticket or call kinit manually to populate the default credential cache, set ansible_winrm_kinit_mode=manual in the inventory. Automatic ticket management requires a standard kinit binary on the control host system path. To specify a different location or binary name, set the ansible_winrm_kinit_cmd inventory variable to the fully-qualified path to an MIT krbv5 kinit-compatible binary.
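The following is a minimal, hedged inventory sketch of the settings described above; the group name and the kinit path are illustrative assumptions:
# Hypothetical inventory excerpt: disable automatic ticket management and
# point Ansible at a specific MIT Kerberos kinit-compatible binary.
[windows:vars]
ansible_winrm_kinit_mode = manual
ansible_winrm_kinit_cmd = /usr/local/bin/kinit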
[ "install krb5-workstation install krb5-devel install krb5-libs", "[logging] default = FILE:/var/log/krb5libs.log kdc = FILE:/var/log/krb5kdc.log admin_server = FILE:/var/log/kadmind.log [libdefaults] default_realm = WEBSITE.COM dns_lookup_realm = false dns_lookup_kdc = false ticket_lifetime = 24h renew_lifetime = 7d forwardable = true [realms] WEBSITE.COM = { kdc = WIN-SA2TXZOTVMV.website.com admin_server = WIN-SA2TXZOTVMV.website.com } [domain_realm] .website.com = WEBSITE.COM website.com = WEBSITE.COM", "kinit username Password for [email protected]:", "klist Ticket cache: FILE:/tmp/krb5cc_0 Default principal: [email protected] Valid starting Expires Service principal 01/25/23 11:42:56 01/25/23 21:42:53 krbtgt/[email protected] renew until 02/01/23 11:42:56", "[windows] win01.WEBSITE.COM [windows:vars] ansible_user = [email protected] ansible_connection = winrm ansible_port = 5986" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-controller-kerberos-authentication
Chapter 9. Examples
Chapter 9. Examples Use the following examples to understand how to launch a compute instance post-deployment with various network configurations. 9.1. Example 1: Launching a Compute node with one NIC on the project and provider networks Use this example to understand how to launch a Compute node with the private project network and the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a single NIC configuration and requires at least three IP addresses. Prerequisites To complete this example successfully, you must have the following IP addresses available in your environment: One IP address for the OpenStack services. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. At least one IP address for floating IPs on the provider network. Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Enable DNS: Create Neutron networks: Create a virtual router: Create a floating IP: Launch the instance: Assign the floating IP: Replace FLOATING_IP with the address of the floating IP that you create in a previous step. Test SSH: Replace FLOATING_IP with the address of the floating IP that you create in a previous step. Network Architecture 9.2. Example 2: Launching a Compute node with one NIC on the provider network Use this example to understand how to launch a Compute node with the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a single NIC configuration and requires at least four IP addresses. Prerequisites To complete this example successfully, you must have the following IP addresses available in your environment: One IP address for the OpenStack services. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. One IP address for DHCP on the provider network. At least one IP address for floating IPs on the provider network. Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Enable DNS: Create Neutron networks: Create a virtual router: Launch the instance: Test SSH: Replace VM_IP with the address of the virtual machine that you create in the previous step. Network Architecture 9.3. Example 3: Launching a Compute node with two NICs on the project and provider networks Use this example to understand how to launch a Compute node with the private project network and the provider network after you deploy the all-in-one Red Hat OpenStack Platform environment. This example is based on a dual NIC configuration and requires at least three IP addresses on the provider network. Prerequisites One IP address for a gateway on the provider network. One IP address for OpenStack endpoints. One IP address for the virtual router to provide connectivity to the project network. This IP address is assigned automatically in this example. At least one IP address for floating IPs on the provider network.
Procedure Create configuration helper variables: Create a basic flavor: Download CirrOS and create an OpenStack image: Configure SSH: Create a simple network security group: Configure the new network security group: Enable SSH: Enable ping: Enable DNS: Create Neutron networks: Create a virtual router: Create a floating IP: Launch the instance: Assign the floating IP: Replace FLOATING_IP with the address of the floating IP that you create in a previous step. Test SSH: Replace FLOATING_IP with the address of the floating IP that you create in a previous step. Network Architecture
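After the final step of each example, it can be useful to confirm that the instance is reachable before troubleshooting SSH. This is a hedged sketch using standard OpenStack client commands consistent with the examples above; replace FLOATING_IP with your own address:
# Confirm the instance is ACTIVE and has the floating IP attached
openstack server show myserver
# Confirm basic connectivity before testing SSH
ping -c 3 <FLOATING_IP>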
[ "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.24.1 export STANDALONE_HOST=192.168.24.2 export PUBLIC_NETWORK_CIDR=192.168.24.0/24 export PRIVATE_NETWORK_CIDR=192.168.100.0/24 export PUBLIC_NET_START=192.168.24.4 export PUBLIC_NET_END=192.168.24.5 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen openstack keypair create --public-key ~/.ssh/id_rsa.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --no-dhcp --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public openstack subnet create private-net --subnet-range USDPRIVATE_NETWORK_CIDR --network private", "NOTE: In this case an IP will be automatically assigned from the allocation pool for the subnet. openstack router create vrouter openstack router set vrouter --external-gateway public openstack router add subnet vrouter private-net", "openstack floating ip create public", "openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver", "openstack server add floating ip myserver <FLOATING_IP>", "ssh cirros@<FLOATING_IP>", "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.24.1 export STANDALONE_HOST=192.168.24.2 export VROUTER_IP=192.168.24.3 export PUBLIC_NETWORK_CIDR=192.168.24.0/24 export PUBLIC_NET_START=192.168.24.4 export PUBLIC_NET_END=192.168.24.5 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen openstack keypair create --public-key ~/.ssh/id_rsa.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public --host-route destination=169.254.169.254/32,gateway=USDVROUTER_IP --host-route destination=0.0.0.0/0,gateway=USDGATEWAY --dns-nameserver USDDNS_SERVER", "NOTE: In this case an IP will be automatically assigned from the allocation pool for the subnet. 
openstack router create vrouter openstack port create --network public --fixed-ip subnet=public-net,ip-address=USDVROUTER_IP vrouter-port openstack router add port vrouter vrouter-port", "openstack server create --flavor tiny --image cirros --key-name default --network public --security-group basic myserver", "ssh cirros@<VM_IP>", "standalone with project networking and provider networking export OS_CLOUD=standalone export GATEWAY=192.168.24.1 export STANDALONE_HOST=192.168.0.2 export PUBLIC_NETWORK_CIDR=192.168.24.0/24 export PRIVATE_NETWORK_CIDR=192.168.100.0/24 export PUBLIC_NET_START=192.168.0.3 export PUBLIC_NET_END=192.168.24.254 export DNS_SERVER=1.1.1.1", "openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny", "wget https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.4.0-x86_64-disk.img", "ssh-keygen openstack keypair create --public-key ~/.ssh/id_rsa.pub default", "openstack security group create basic", "openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0", "openstack security group rule create --protocol icmp basic", "openstack security group rule create --protocol udp --dst-port 53:53 basic", "openstack network create --external --provider-physical-network datacentre --provider-network-type flat public openstack network create --internal private openstack subnet create public-net --subnet-range USDPUBLIC_NETWORK_CIDR --no-dhcp --gateway USDGATEWAY --allocation-pool start=USDPUBLIC_NET_START,end=USDPUBLIC_NET_END --network public openstack subnet create private-net --subnet-range USDPRIVATE_NETWORK_CIDR --network private", "NOTE: In this case an IP will be automatically assigned from the allocation pool for the subnet. openstack router create vrouter openstack router set vrouter --external-gateway public openstack router add subnet vrouter private-net", "openstack floating ip create public", "openstack server create --flavor tiny --image cirros --key-name default --network private --security-group basic myserver", "openstack server add floating ip myserver <FLOATING_IP>", "ssh cirros@<FLOATING_IP>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/quick_start_guide/examples
Chapter 1. Introduction to Control Groups (Cgroups)
Chapter 1. Introduction to Control Groups (Cgroups) 1.1. What are Control Groups The control groups , abbreviated as cgroups in this guide, are a Linux kernel feature that allows you to allocate resources - such as CPU time, system memory, network bandwidth, or combinations of these resources - among hierarchically ordered groups of processes running on a system. By using cgroups, system administrators gain fine-grained control over allocating, prioritizing, denying, managing, and monitoring system resources. Hardware resources can be smartly divided up among applications and users, increasing overall efficiency. Control Groups provide a way to hierarchically group and label processes, and to apply resource limits to them. Traditionally, all processes received similar amounts of system resources that the administrator could modulate with the process niceness value. With this approach, applications that involved a large number of processes received more resources than applications with few processes, regardless of the relative importance of these applications. Red Hat Enterprise Linux 7 moves the resource management settings from the process level to the application level by binding the system of cgroup hierarchies with the systemd unit tree. Therefore, you can manage system resources with systemctl commands, or by modifying systemd unit files. See Chapter 2, Using Control Groups for details. In previous versions of Red Hat Enterprise Linux, system administrators built custom cgroup hierarchies with the use of the cgconfig command from the libcgroup package. This package is now deprecated, and it is not recommended to use it since it can easily create conflicts with the default cgroup hierarchy. However, libcgroup is still available to cover certain specific cases, where systemd is not yet applicable, most notably for using the net-prio subsystem. See Chapter 3, Using libcgroup Tools . The aforementioned tools provide a high-level interface to interact with cgroup controllers (also known as subsystems) in the Linux kernel. The main cgroup controllers for resource management are cpu , memory , and blkio , see Available Controllers in Red Hat Enterprise Linux 7 for the list of controllers enabled by default. For a detailed description of resource controllers and their configurable parameters, see Controller-Specific Kernel Documentation .
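Since resource management in Red Hat Enterprise Linux 7 is exposed through systemd, a unit's resources can be capped directly with systemctl. This is a minimal, hedged sketch; httpd.service and the limit values are illustrative placeholders:
# Persistently cap CPU shares and memory for a service via its systemd unit
systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M
# Verify the change took effect
systemctl show httpd.service -p CPUShares -p MemoryLimit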
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/chap-introduction_to_control_groups
Chapter 15. Using Cruise Control for cluster rebalancing
Chapter 15. Using Cruise Control for cluster rebalancing Cruise Control is an open-source application designed to run alongside Kafka to help optimize use of cluster resources by doing the following: Monitoring cluster workload Rebalancing partitions based on predefined constraints Cruise Control operations help with running a more balanced Kafka cluster that uses brokers more efficiently. As Kafka clusters evolve, some brokers may become overloaded while others remain underutilized. Cruise Control addresses this imbalance by modeling resource utilization at the replica level, including CPU, disk, and network load, and generating optimization proposals (which you can approve or reject) for balanced partition assignments based on configurable optimization goals. The cruisecontrol.properties file contains the configuration for Cruise Control. You can specify and configure all the properties listed in the Configurations section of the Cruise Control Wiki. 15.1. Cruise Control components and features Cruise Control comprises four main components: Load Monitor Load Monitor collects the metrics and analyzes cluster workload data. Analyzer Analyzer generates optimization proposals based on collected data and configured goals. Anomaly Detector Anomaly Detector identifies and reports irregularities in cluster behavior. Executor Executor applies approved optimization proposals to the cluster. Cruise Control also provides a REST API for client interactions, which Streams for Apache Kafka uses to support these features: Generating optimization proposals from optimization goals Rebalancing a Kafka cluster based on an optimization proposal Changing topic replication factor Note Other Cruise Control features are not currently supported, including self healing, notifications, and write-your-own goals. 15.1.1. Optimization goals Optimization goals define objectives for rebalancing, such as distributing topic replicas evenly across brokers. They are categorized as follows: Supported goals are a list of goals supported by the Cruise Control instance that can be used in its operations. By default, this list includes all goals included with Cruise Control. For a goal to be used in other categories, such as default or hard goals, it must first be listed in supported goals. To prevent a goal's usage, remove it from this list. Hard goals are preset and must be satisfied for a proposal to succeed. Soft goals are preset goals with objectives that are prioritized during optimization as much as possible, without preventing a proposal from being created if all hard goals are satisfied. Default goals refer to the goals used by default when generating proposals. They match the supported goals unless specifically set by the user. Intra-broker goals refer to the goals used specifically for rebalances on the same broker. Proposal-specific goals are a subset of supported goals configured for specific proposals. Set proposal-specific goals at runtime. Specify other optimization goals in a configuration properties file using their fully-qualified domain names and in descending priority order. The config/cruisecontrol.properties file contains the configuration for Cruise Control. Use the following properties to manage goals: Supported goals: goals property Hard goals: hard.goals property Default goals: default.goals property Intra-broker goals: intra.broker.goals property 15.1.1.1. Supported goals Supported goals are predefined and available to use for generating Cruise Control optimization proposals.
Goals not listed as supported goals cannot be used in Cruise Control operations. Some supported goals are preset as hard goals. Configure supported goals in cruisecontrol.properties : To modify supported goals, specify the goals in the goals property. You can adjust the priority order in the goals configuration. You must specify at least one supported goal. 15.1.1.2. Hard and soft goals Hard goals must be satisfied for optimization proposals to be generated. Soft goals are best-effort objectives that Cruise Control tries to meet after all hard goals are satisfied. The classification of hard and soft goals is fixed in Cruise Control code and cannot be changed. Cruise Control first prioritizes satisfying hard goals, and then addresses soft goals in the order they are listed. A proposal meeting all hard goals is valid, even if it violates some soft goals. For example, a soft goal might be to evenly distribute a topic's replicas. Cruise Control continues to generate an optimization proposal even if the soft goal isn't completely satisfied. Configure hard goals in cruisecontrol.properties : To modify hard goals, specify a subset of supported goals in the hard.goals property. You can adjust the priority order in the hard goals configuration. To exclude a hard goal, ensure it's not in either default.goals or hard.goals . Increasing the number of configured hard goals will reduce the likelihood of Cruise Control generating optimization proposals. 15.1.1.3. Default goals Cruise Control uses default goals to generate an optimization proposal. Default goals must be a subset of the supported optimization goals. The optimization proposal based on this supported goals list is then generated and cached. Configure default goals in cruisecontrol.properties : To modify default goals, specify a subset of supported goals in the default.goals property. You can adjust the priority order in the default goals configuration. You must specify at least one default goal. 15.1.1.4. Intra-broker goals Cruise Control uses intra-broker goals to balance data between disks on the same broker, which is useful for deployments with JBOD storage and multiple disks. Configure intra-broker goals in cruisecontrol.properties : To modify intra-broker goals, list the supported goals in the intra.broker.goals property. You can adjust the priority order in the intra-broker goals configuration. 15.1.1.5. Proposal-specific goals Proposal-specific optimization goals support the creation of optimization proposals based on a specific list of goals. If proposal-specific goals are not set, then default goals are used. Specify proposal-specific goals at runtime as a subset of supported optimization goals for customization. For example, you can optimize topic leader replica distribution across the Kafka cluster without considering disk capacity or utilization by defining a single proposal-specific goal. When specifying proposal-specific goals, include all configured hard goals, or an error occurs. To ignore the configured hard goals in an optimization proposal, add the skip_hard_goal_check=true parameter to the request. 15.1.1.6. Goals order of priority Unless you change the configuration, Streams for Apache Kafka inherits goals from Cruise Control. The following list shows supported goals inherited by Streams for Apache Kafka from Cruise Control in descending priority order. Goals labeled as hard are mandatory constraints that must be satisfied for optimization proposals.
RackAwareGoal (hard)
MinTopicLeadersPerBrokerGoal (hard)
ReplicaCapacityGoal (hard)
DiskCapacityGoal (hard)
NetworkInboundCapacityGoal (hard)
NetworkOutboundCapacityGoal (hard)
CpuCapacityGoal (hard)
ReplicaDistributionGoal
PotentialNwOutGoal
DiskUsageDistributionGoal
NetworkInboundUsageDistributionGoal
NetworkOutboundUsageDistributionGoal
CpuUsageDistributionGoal
TopicReplicaDistributionGoal
LeaderReplicaDistributionGoal
LeaderBytesInDistributionGoal
PreferredLeaderElectionGoal
IntraBrokerDiskCapacityGoal (hard)
IntraBrokerDiskUsageDistributionGoal
Resource distribution goals are subject to capacity limits on broker resources. For more information on each optimization goal, see Goals in the Cruise Control Wiki. Note "Write your own" goals and Kafka assigner goals are not supported. Example configuration for default and hard goals default.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal hard.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuCapacityGoal Important Ensure that the supported goals , default.goals , and (unless skip_hard_goal_check is set to true ) proposal-specific goals include all hard goals specified in hard.goals to avoid errors when generating optimization proposals. Hard goals must be included as a subset in the supported, default, and proposal-specific goals. Example request with proposal-specific goals curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' 15.1.1.7. Skipping hard goal checks If skip_hard_goal_check=true is specified in a request, Cruise Control does not verify that the proposal-specific goals include all the configured hard goals. This allows for more flexibility in generating optimization proposals, but may lead to proposals that do not satisfy all hard goals. However, any hard goals included in the proposal-specific goals will still be treated as hard goals by Cruise Control, even with skip_hard_goal_check=true . 15.1.2.
Optimization proposals Optimization proposals are summaries of proposed changes based on the defined optimization goals, assessed in a specific order of priority. You can approve or reject proposals and rerun them with adjusted goals if needed. With Cruise Control deployed for use in Streams for Apache Kafka, the process to generate and approve an optimization proposal is as follows: Make a request to generate an optimization proposal. This request triggers Cruise Control to initiate the optimization proposal generation process. A Cruise Control Metrics Reporter runs in every Kafka broker, collecting raw metrics and publishing them to a dedicated Kafka topic ( __CruiseControlMetrics ). Metrics for brokers, topics, and partitions are aggregated, sampled, and stored in other topics automatically created when Cruise Control is deployed . Load Monitor collects, processes, and stores the metrics as a workload model, including CPU, disk, and network utilization data, which is used by the Analyzer and Anomaly Detector. Anomaly Detector continuously monitors the health and performance of the Kafka cluster, checking for issues, such as broker failures or disk capacity problems, that could impact cluster stability. Analyzer creates optimization proposals based on the workload model from the Load Monitor. Based on configured goals and capacities, it generates an optimization proposal for balancing partitions across brokers. Through the REST API, a summary of the proposal is returned in the response to the request. The optimization proposal is approved or rejected based on its alignment with cluster management goals. If approved, the Executor applies the optimization proposal to rebalance the Kafka cluster. This involves reassigning partitions and redistributing workload across brokers according to the approved proposal. Cruise Control optimization process Optimization proposals comprise a list of partition reassignment mappings. When you approve a proposal, the Cruise Control server applies these partition reassignments to the Kafka cluster. A partition reassignment consists of either of the following types of operations: Partition movement: Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement: Involves switching the leader of the partition's replicas. Cruise Control issues partition reassignments to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number and magnitude of each type of movement contained in each batch. 15.1.2.1. Rebalancing endpoints Proposals for rebalances can be generated by making a request to one of three endpoints. /rebalance endpoint A request to this endpoint runs a full rebalance by moving replicas across all the brokers in the cluster. /add_broker endpoint This endpoint is used after scaling up a Kafka cluster by adding one or more brokers. Normally, after scaling up a Kafka cluster, new brokers are used to host only the partitions of newly created topics. If no new topics are created, the newly added brokers are not used and the existing brokers remain under the same load. By using the add_broker endpoint immediately after adding brokers to the cluster, the rebalancing operation moves replicas from existing brokers to the newly added brokers.
You specify the new brokers in the request as a list of broker IDs. /remove_broker endpoint This endpoint is used before scaling down a Kafka cluster by removing one or more brokers. The operation moves replicas off the brokers that are going to be removed. When these brokers are not hosting replicas anymore, you can safely run the scaling down operation. You specify the brokers you're removing as a list of broker IDs. In general, use the full rebalance endpoint to rebalance a Kafka cluster by spreading the load across brokers. Use the add_broker and remove_broker endpoints only if you want to scale your cluster up or down and rebalance the replicas accordingly. The procedure to run a rebalance is the same across the three endpoints. The only difference is specifying the endpoint in the request and, if needed, listing brokers that have been added or will be removed. 15.1.2.2. The results of an optimization proposal When an optimization proposal is generated, a summary of the changes is returned. The summary is returned in a response to an HTTP request through the Cruise Control API. The summary provides an overview of the proposed cluster rebalance and indicates the scale of the changes involved. The information provided is a summary of the full optimization proposal. 15.1.2.3. Approving or rejecting an optimization proposal An optimization proposal summary shows the proposed scope of changes. When you make a POST request to the /rebalance endpoint, an optimization proposal summary is returned in the response. Returning an optimization proposal summary curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/rebalance' Use the summary to decide whether to approve or reject an optimization proposal. Approving an optimization proposal You approve the optimization proposal by making a POST request to the /rebalance endpoint and setting the dryrun parameter to false (default true ). Cruise Control applies the proposal to the Kafka cluster and starts a cluster rebalance operation. Rejecting an optimization proposal If you choose not to approve an optimization proposal, you can change the optimization goals or update any of the rebalance performance tuning options , and then generate another proposal. You can resend a request without the dryrun parameter to generate a new optimization proposal. Use optimization proposals to assess the movements required for a rebalance. For example, a summary describes inter-broker and intra-broker movements. Inter-broker rebalancing moves data between separate brokers. Intra-broker rebalancing moves data between disks on the same broker when you are using a JBOD storage configuration. Such information can be useful even if you don't go ahead and approve the proposal. You might reject an optimization proposal, or delay its approval, because of the additional load on a Kafka cluster when rebalancing. If the proposal is delayed for too long, the cluster load may change significantly, so it may be better to request a new proposal. In the following example, the proposal suggests the rebalancing of data between separate brokers. The rebalance involves the movement of 55 partition replicas, totaling 12MB of data, across the brokers. The proposal will also move 24 partition leaders to different brokers. This requires a change to the cluster metadata, which has a low impact on performance. The balancedness scores are measurements of the overall balance of the Kafka cluster before and after the optimization proposal is approved.
A balancedness score is based on optimization goals. If all goals are satisfied, the score is 100. The score is reduced for each goal that will not be met. Compare the balancedness scores to see whether the Kafka cluster is less balanced than it could be following a rebalance. Example optimization proposal summary Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED. a4f833bd-2055-4213-bfdd-ad21f95bf184 Though the inter-broker movement of partition replicas has a high impact on performance, the total amount of data is not large. If the total data was much larger, you could reject the proposal, or delay approving the rebalance until a time when the impact on the performance of the Kafka cluster would be limited. The provision status indicates whether the current cluster configuration supports the optimization goals. Check the provision status to see if you should add or remove brokers. Table 15.1. Optimization proposal provision status
Status | Description
RIGHT_SIZED | The cluster has an appropriate number of brokers to satisfy the optimization goals.
UNDER_PROVISIONED | The cluster is under-provisioned and requires more brokers to satisfy the optimization goals.
OVER_PROVISIONED | The cluster is over-provisioned and requires fewer brokers to satisfy the optimization goals.
UNDECIDED | The status is not relevant or it has not yet been decided.
15.1.2.4. Optimization proposal summary properties The following table explains the properties contained in the optimization proposal's summary. Table 15.2. Properties contained in an optimization proposal summary Property Description <n> inter-broker replica (<y> MB) moves <n>: The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. <y> MB: The sum of the size of each partition replica that will be moved to a separate broker. Performance impact during rebalance operation : Variable. The larger the number of MBs, the longer the cluster rebalance will take to complete. <n> intra-broker replica (<y> MB) moves <n>: The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but less than inter-broker replica moves . <y> MB: The sum of the size of each partition replica that will be moved between disks on the same broker. Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see inter-broker replica moves ). <n> excluded topics The number of topics excluded from the calculation of partition replica/leader movements in the optimization proposal. You can exclude topics in one of the following ways: In the cruisecontrol.properties file, specify a regular expression in the topics.excluded.from.partition.movement property. In a POST request to the /rebalance endpoint, specify a regular expression in the excluded_topics parameter.
Topics that match the regular expression are listed in the response and will be excluded from the cluster rebalance. <n> leadership moves <n>: The number of partitions whose leaders will be switched to different replicas. Performance impact during rebalance operation : Relatively low. <n> recent windows <n>: The number of metrics windows upon which the optimization proposal is based. <n>% of the partitions covered <n>%: The percentage of partitions in the Kafka cluster covered by the optimization proposal. On-demand Balancedness Score Before (<nn.yyy>) After (<nn.yyy>) Measurements of the overall balance of a Kafka cluster. Cruise Control assigns a Balancedness Score to every optimization goal based on several factors, including priority (the goal's position in the list of default.goals or user-provided goals). The On-demand Balancedness Score is calculated by subtracting the sum of the Balancedness Score of each violated soft goal from 100. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. 15.1.2.5. Adjusting the cached proposal refresh rate Cruise Control maintains a cached optimization proposal based on the configured default optimization goals. This proposal is generated from the workload model and updated every 15 minutes to reflect the current state of the Kafka cluster. When you generate an optimization proposal using the default goals, Cruise Control returns the latest cached version. For clusters with rapidly changing workloads, you may want to shorten the refresh interval to ensure the optimization proposal reflects the most recent state. However, reducing the interval increases the load on the Cruise Control server. To adjust the refresh rate, modify the proposal.expiration.ms setting in the Cruise Control deployment configuration. Additional resources Cruise Control documentation 15.1.3. Tuning options for rebalances Configuration options allow you to fine-tune cluster rebalance performance. These settings control the movement of partition replicas and leadership, as well as the bandwidth allocated for rebalances. 15.1.3.1. Selecting replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which applies the reassignments in the order they were generated. However, this strategy could lead to the delay of other partition reassignments if large partition reassignments are generated and ordered first. Cruise Control provides four alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Reassign smaller partitions first. PrioritizeLargeReplicaMovementStrategy : Reassign larger partitions first. PostponeUrpReplicaMovementStrategy : Prioritize partitions without out-of-sync replicas. PrioritizeMinIsrWithOfflineReplicasStrategy : Prioritize reassignments for partitions at or below their minimum in-sync replicas (MinISR) with offline replicas. Set concurrency.adjuster.min.isr.check.enabled in the Cruise Control configuration to enable this strategy. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the next strategy in the sequence to decide the order, and so on. 15.1.3.2.
Rebalance tuning options You can set the following rebalance tuning options when configuring Cruise Control or individual rebalances. Set the tuning options using one of the following methods: Properties in the cruisecontrol.properties file Parameters in POST requests to the /rebalance endpoint The relevant configurations for both methods are summarized in the following table. Table 15.3. Rebalance performance tuning configuration
Cruise Control property | Rebalance endpoint parameter | Default | Description
num.concurrent.partition.movements.per.broker | concurrent_partition_movements_per_broker | 5 | The maximum number of inter-broker partition movements in each partition reassignment batch
num.concurrent.intra.broker.partition.movements | concurrent_intra_broker_partition_movements | 2 | The maximum number of intra-broker partition movements in each partition reassignment batch
num.concurrent.leader.movements | concurrent_leader_movements | 1000 | The maximum number of partition leadership changes in each partition reassignment batch
default.replication.throttle | replication_throttle | Null (no limit) | The bandwidth (in bytes per second) to assign to partition reassignment
default.replica.movement.strategies | replica_movement_strategies | BaseReplicaMovementStrategy | The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. There are three strategies: PrioritizeSmallReplicaMovementStrategy , PrioritizeLargeReplicaMovementStrategy , and PostponeUrpReplicaMovementStrategy . For the server setting, use a comma-separated list with the fully qualified names of the strategy class (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the rebalance parameters, use a comma-separated list of the class names of the replica movement strategies.
Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Configurations in the Cruise Control Wiki REST APIs in the Cruise Control Wiki 15.2. Downloading Cruise Control A ZIP file distribution of Cruise Control is available for download from the Red Hat website. You can download the latest version of Red Hat Streams for Apache Kafka from the Streams for Apache Kafka software downloads page . Procedure Download the latest version of the Red Hat Streams for Apache Kafka Cruise Control archive from the Red Hat Customer Portal . Create the /opt/cruise-control directory: sudo mkdir /opt/cruise-control Extract the contents of the Cruise Control ZIP file to the new directory: unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control Change the ownership of the /opt/cruise-control directory to the Kafka user: sudo chown -R kafka:kafka /opt/cruise-control 15.3. Deploying the Cruise Control Metrics Reporter Before starting Cruise Control, you must configure the Kafka brokers to use the provided Cruise Control Metrics Reporter. The file for the Metrics Reporter is supplied with the Streams for Apache Kafka installation artifacts. When loaded at runtime, the Metrics Reporter sends metrics to the __CruiseControlMetrics topic, one of three auto-created topics . Cruise Control uses these metrics to create and update the workload model and to calculate optimization proposals.
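Before the procedure below, it may help to see what the relevant broker configuration typically looks like. This is a hedged sketch; the fully qualified class name is taken from the upstream Cruise Control project and may differ between versions:
# Hypothetical server.properties excerpt: register the Metrics Reporter
# alongside any reporters that are already configured.
metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter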
Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. You are logged in to Red Hat Enterprise Linux as the Kafka user. Procedure For each broker in the Kafka cluster, one at a time: Stop the Kafka broker: ./bin/kafka-server-stop.sh Edit the Kafka configuration properties file to configure the Cruise Control Metrics Reporter. Add the CruiseControlMetricsReporter class to the metric.reporters configuration option. Do not remove any existing Metrics Reporters. Add the following configuration options and values:
cruise.control.metrics.topic.auto.create=true
cruise.control.metrics.topic.num.partitions=1
cruise.control.metrics.topic.replication.factor=1
These options enable the Cruise Control Metrics Reporter to create the __CruiseControlMetrics topic with a log cleanup policy of DELETE . For more information, see Auto-created topics and Configuring logging and cleanup policy . Configure SSL, if required. In the Kafka configuration properties file, configure SSL between the Cruise Control Metrics Reporter and the Kafka broker by setting the relevant client configuration properties. The Metrics Reporter accepts all standard producer-specific configuration properties with the cruise.control.metrics.reporter prefix. For example: cruise.control.metrics.reporter.ssl.truststore.password . In the Cruise Control properties file ( ./cruise-control/config/cruisecontrol.properties ) configure SSL between the Kafka broker and the Cruise Control server by setting the relevant client configuration properties. Cruise Control inherits SSL client property options from Kafka and uses those properties for all Cruise Control server clients. Restart the Kafka broker: ./bin/kafka-server-start.sh -daemon ./config/server.properties For information on restarting brokers in a multi-node cluster, see Section 4.3, "Performing a graceful rolling restart of Kafka brokers" . Repeat steps 1-5 for the remaining brokers. 15.4. Configuring and starting Cruise Control Configure the properties used by Cruise Control and then start the Cruise Control server using the kafka-cruise-control-start.sh script. The server is hosted on a single machine for the whole Kafka cluster. Three topics are auto-created when Cruise Control starts. For more information, see Auto-created topics . Prerequisites You are logged in to Red Hat Enterprise Linux as the Kafka user. You have downloaded Cruise Control . You have deployed the Cruise Control Metrics Reporter . Procedure Edit the Cruise Control properties file ( ./cruise-control/config/cruisecontrol.properties ). Configure the properties shown in the following example configuration: # The Kafka cluster to control. bootstrap.servers=localhost:9092 1 # The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 # The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 # The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 # The list of supported goals goals={list of supported optimization goals} 5 # The list of supported hard goals hard.goals={List of hard goals} 6 # How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 # The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8 1 Host and port numbers of the Kafka broker (always port 9092).
2 Replication factor of the Kafka metric sample store topic. If you are evaluating Cruise Control in a single-node Kafka and ZooKeeper cluster, set this property to 1. For production use, set this property to 2 or more. 3 The configuration file that sets the maximum capacity limits for broker resources. Use the file that applies to your Kafka deployment configuration. For more information, see Capacity configuration . 4 Comma-separated list of default optimization goals, using fully-qualified domain names (FQDNs). A number of supported optimization goals (see 5) are already set as default optimization goals; you can add or remove goals if desired. 5 Comma-separated list of supported optimization goals, using FQDNs. To completely exclude goals from being used to generate optimization proposals, remove them from the list. 6 Comma-separated list of hard goals, using FQDNs. Seven of the supported optimization goals are already set as hard goals; you can add or remove goals if desired. 7 The interval, in milliseconds, for refreshing the cached optimization proposal that is generated from the default optimization goals. 8 Host and port numbers of the ZooKeeper connection (always port 2181). Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd ./cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number> To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state' Auto-created topics The following table shows the three topics that are automatically created when Cruise Control starts. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 15.4. Auto-created topics
Auto-created topic | Created by | Function
__CruiseControlMetrics | Cruise Control Metrics Reporter | Stores the raw metrics from the Metrics Reporter in each Kafka broker.
__KafkaCruiseControlPartitionMetricSamples | Cruise Control | Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator .
__KafkaCruiseControlModelTrainingSamples | Cruise Control | Stores the metrics samples used to create the Cluster Workload Model .
To ensure that log compaction is disabled in the auto-created topics, make sure that you configure the Cruise Control Metrics Reporter as described in Section 15.3, "Deploying the Cruise Control Metrics Reporter" . Log compaction can remove records that are needed by Cruise Control and prevent it from working properly.
For example: capacity.config.file=config/capacityJBOD.json Capacity limits can be set for the following broker resources in the described units: DISK : Disk storage in MB CPU : CPU utilization as a percentage (0-100) or as a number of cores NW_IN : Inbound network throughput in KB per second NW_OUT : Outbound network throughput in KB per second To apply the same capacity limits to every broker monitored by Cruise Control, set capacity limits for broker ID -1 . To set different capacity limits for individual brokers, specify each broker ID and its capacity configuration. Example capacity limits configuration { "brokerCapacities":[ { "brokerId": "-1", "capacity": { "DISK": "100000", "CPU": "100", "NW_IN": "10000", "NW_OUT": "10000" }, "doc": "This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB." }, { "brokerId": "0", "capacity": { "DISK": "500000", "CPU": "100", "NW_IN": "50000", "NW_OUT": "50000" }, "doc": "This overrides the capacity for broker 0." } ] } For more information, see Populating the Capacity Configuration File in the Cruise Control Wiki. 15.6. Configuring logging and cleanup policy Cruise Control uses log4j1 for all server logging. To change the default configuration, edit the log4j.properties file in ./cruise-control/config/log4j.properties . You must restart the Cruise Control server before the changes take effect. It is important that the auto-created __CruiseControlMetrics topic (see auto-created topics ) has a log cleanup policy of DELETE rather than COMPACT . Otherwise, records that are needed by Cruise Control might be removed. As described in Section 15.3, "Deploying the Cruise Control Metrics Reporter" , setting the following options in the Kafka configuration file ensures that the DELETE log cleanup policy is correctly set: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 If topic auto-creation is disabled in the Cruise Control Metrics Reporter ( cruise.control.metrics.topic.auto.create=false ), but enabled in the Kafka cluster, then the __CruiseControlMetrics topic is still automatically created by the broker. In this case, you must change the log cleanup policy of the __CruiseControlMetrics topic to DELETE using the kafka-configs.sh tool. Get the current configuration of the __CruiseControlMetrics topic: /opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe Change the log cleanup policy in the topic configuration: ./bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete If topic auto-creation is disabled in both the Cruise Control Metrics Reporter and the Kafka cluster, you must create the __CruiseControlMetrics topic manually and then configure it to use the DELETE log cleanup policy using the kafka-configs.sh tool. For more information, see Section 9.9, "Modifying a topic configuration" . 15.7. Generating optimization proposals When you make a POST request to the /rebalance endpoint, Cruise Control generates an optimization proposal to rebalance the Kafka cluster based on the optimization goals provided. You can use the results of the optimization proposal to rebalance your Kafka cluster.
You can run the optimization proposal using one of the following endpoints: /rebalance /add_broker /remove_broker The endpoint you use depends on whether you are rebalancing across all the brokers already running in the Kafka cluster, or whether you want to rebalance after adding brokers (scaling up) or before removing brokers (scaling down). The optimization proposal is generated as a dry run unless the dryrun parameter is supplied and set to false . In "dry run mode", Cruise Control generates the optimization proposal and the estimated result, but does not initiate a cluster rebalance from the proposal. You can analyze the information returned in the optimization proposal and decide whether to approve it. Use the following parameters to make requests to the endpoints: dryrun type: boolean, default: true Informs Cruise Control whether you want to generate an optimization proposal only ( true ), or generate an optimization proposal and perform a cluster rebalance ( false ). When dryrun=true (the default), you can also pass the verbose parameter to return more detailed information about the state of the Kafka cluster. This includes metrics for the load on each Kafka broker before and after the optimization proposal is applied, and the differences between the before and after values. excluded_topics type: regex A regular expression that matches the topics to exclude from the calculation of the optimization proposal. goals type: list of strings, default: the configured default.goals list List of user-provided optimization goals to use to prepare the optimization proposal. If goals are not supplied, the configured default.goals list in the cruisecontrol.properties file is used. skip_hard_goal_check type: boolean, default: false By default, Cruise Control checks that the user-provided optimization goals (in the goals parameter) contain all the configured hard goals (in hard.goals ). A request fails if you supply goals that do not include all of the configured hard.goals . Set skip_hard_goal_check to true if you want to generate an optimization proposal with user-provided optimization goals that do not include all the configured hard.goals . json type: boolean, default: false Controls the type of response returned by the Cruise Control server. If not supplied, or set to false , then Cruise Control returns text formatted for display on the command line. If you want to extract elements of the returned information programmatically, set json=true . This returns JSON-formatted text that can be piped to tools such as jq , or parsed in scripts and programs. verbose type: boolean, default: false Controls the level of detail in responses that are returned by the Cruise Control server. Can be used with dryrun=true . Note Other parameters are available. For more information, see REST APIs in the Cruise Control Wiki. Prerequisites Kafka is running. You have configured Cruise Control . (Optional for scaling up) You have installed new brokers on hosts to include in the rebalance. Procedure Generate an optimization proposal using a POST request to the /rebalance , /add_broker , or /remove_broker endpoint. Example request to /rebalance using default goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' The cached optimization proposal is immediately returned. Note If NotEnoughValidWindows is returned, Cruise Control has not yet recorded enough metrics data to generate an optimization proposal. Wait a few minutes and then resend the request.
Example request to /rebalance using specified goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal' If the request satisfies the supplied goals, the cached optimization proposal is immediately returned. Otherwise, a new optimization proposal is generated using the supplied goals; this takes longer to calculate. To force a new proposal to be generated regardless of the cache, add the ignore_proposal_cache=true parameter to the request. Example request to /rebalance using specified goals without hard goals curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' Example request to /add_broker that includes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4' The request includes the IDs of the new brokers only. For example, this request adds brokers with the IDs 3 and 4 . Replicas are moved to the new brokers from existing brokers when rebalancing. Example request to /remove_broker that excludes specified brokers curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4' The request includes the IDs of the brokers being excluded only. For example, this request excludes brokers with the IDs 3 and 4 . Replicas are moved from the brokers being removed to other existing brokers when rebalancing. Note If a broker that is being removed has excluded topics, replicas are still moved. Review the optimization proposal contained in the response. The properties describe the pending cluster rebalance operation. The proposal contains a high level summary of the proposed optimization, followed by summaries for each default optimization goal, and the expected cluster state after the proposal has been executed. Pay particular attention to the following information: The Cluster load after rebalance summary. If it meets your requirements, you should assess the impact of the proposed changes using the high level summary. n inter-broker replica (y MB) moves indicates how much data will be moved across the network between brokers. The higher the value, the greater the potential performance impact on the Kafka cluster during the rebalance. n intra-broker replica (y MB) moves indicates how much data will be moved within the brokers themselves (between disks). The higher the value, the greater the potential performance impact on individual brokers (although less than that of n inter-broker replica (y MB) moves ). The number of leadership moves. This has a negligible impact on the performance of the cluster during the rebalance. Asynchronous responses The Cruise Control REST API endpoints time out after 10 seconds by default, although proposal generation continues on the server. A timeout might occur if the most recent cached optimization proposal is not ready, or if user-provided optimization goals were specified with ignore_proposal_cache=true . To allow you to retrieve the optimization proposal at a later time, take note of the request's unique identifier, which is given in the header of responses from the /rebalance endpoint.
To obtain the response using curl , specify the verbose ( -v ) option: curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' Here is an example header: * Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: */* > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2023 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001) If an optimization proposal is not ready within the timeout, you can re-submit the POST request, this time including the User-Task-ID of the original request in the header: curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance' What to do next Section 15.8, "Approving optimization proposals" 15.8. Approving optimization proposals If you are satisfied with your most recently generated optimization proposal, you can instruct Cruise Control to initiate a cluster rebalance and begin reassigning partitions. Leave as little time as possible between generating an optimization proposal and initiating the cluster rebalance. If some time has passed since you generated the original optimization proposal, the cluster state might have changed. Therefore, the cluster rebalance that is initiated might be different from the one you reviewed. If in doubt, first generate a new optimization proposal. Only one cluster rebalance, with a status of "Active", can be in progress at a time. Prerequisites You have generated an optimization proposal from Cruise Control. Procedure Send a POST request to the /rebalance , /add_broker , or /remove_broker endpoint with the dryrun=false parameter: If you used the /add_broker or /remove_broker endpoint to generate a proposal that included or excluded brokers, use the same endpoint to perform the rebalance with or without the specified brokers. Example request to /rebalance curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false' Example request to /add_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4' Example request to /remove_broker curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4' Cruise Control initiates the cluster rebalance and returns the optimization proposal. Check the changes that are summarized in the optimization proposal. If the changes are not what you expect, you can stop the rebalance . Check the progress of the cluster rebalance using the /user_tasks endpoint. The cluster rebalance in progress has a status of "Active".
To view all cluster rebalance tasks executed on the Cruise Control server: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC To view the status of a particular cluster rebalance task, supply the user_task_ids parameter and the task ID: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d' (Optional) Removing brokers when scaling down After a successful rebalance, you can stop any brokers you excluded in order to scale down the Kafka cluster. Check that each broker being removed does not have any live partitions in its log directories ( log.dirs ). ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$' If a log directory does not match the regular expression \.[a-z0-9]+-delete$ , active partitions are still present. If you have active partitions, check that the rebalance has finished, or review the configuration of the optimization proposal. You can run the proposal again. Make sure that there are no active partitions before moving on to the next step. Stop the broker. ./bin/kafka-server-stop.sh Confirm that the broker has stopped. jcmd | grep kafka 15.9. Stopping rebalances You can stop the cluster rebalance that is currently in progress. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to before the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites A cluster rebalance is in progress (indicated by a status of "Active"). Procedure Send a POST request to the /stop_proposal_execution endpoint: curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution' Additional resources Generating optimization proposals
[ "default.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuUsageDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal hard.goals=com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal,com.linkedin.kafka.cruisecontrol.analyzer.goals.CpuCapacityGoal", "curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'", "curl -v -X POST 'http://<cc_host>:<cc_port>/kafkacruisecontrol/rebalance'", "Optimization has 55 inter-broker replica (12 MB) moves, 0 intra-broker replica (0 MB) moves and 24 leadership moves with a cluster model of 5 recent windows and 100.000% of the partitions covered. Excluded Topics: []. Excluded Brokers For Leadership: []. Excluded Brokers For Replica Move: []. Counts: 3 brokers 343 replicas 7 topics. On-demand Balancedness Score Before (78.012) After (82.912). Provision Status: RIGHT_SIZED. a4f833bd-2055-4213-bfdd-ad21f95bf184", "sudo mkdir /opt/cruise-control", "unzip amq-streams-<version>-cruise-control-bin.zip -d /opt/cruise-control", "sudo chown -R kafka:kafka /opt/cruise-control", "./bin/kafka-server-stop.sh", "metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter", "cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1", "./bin/kafka-server-start.sh -daemon ./config/server.properties", "The Kafka cluster to control. 
bootstrap.servers=localhost:9092 1 The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 The list of supported goals goals={list of supported optimization goals} 5 The list of supported hard goals hard.goals={List of hard goals} 6 How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8", "cd ./cruise-control/ ./kafka-cruise-control-start.sh config/cruisecontrol.properties <port_number>", "curl -X GET 'http://<cc_host>:<cc_port>/kafkacruisecontrol/state'", "capacity.config.file=config/capacityJBOD.json", "{ \"brokerCapacities\":[ { \"brokerId\": \"-1\", \"capacity\": { \"DISK\": \"100000\", \"CPU\": \"100\", \"NW_IN\": \"10000\", \"NW_OUT\": \"10000\" }, \"doc\": \"This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB.\" }, { \"brokerId\": \"0\", \"capacity\": { \"DISK\": \"500000\", \"CPU\": \"100\", \"NW_IN\": \"50000\", \"NW_OUT\": \"50000\" }, \"doc\": \"This overrides the capacity for broker 0.\" } ] }", "opt/kafka/bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --describe", "./bin/kafka-configs.sh --bootstrap-server <broker_address> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?brokerid=3,4'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/remove_broker?brokerid=3,4'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "* Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2023 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001)", "curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/add_broker?dryrun=false&brokerid=3,4'", "curl -v -X POST 
'cruise-control-server:9090/kafkacruisecontrol/remove_broker?dryrun=false&brokerid=3,4'", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d'", "ls -l <LogDir> | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'", "./bin/kafka-server-stop.sh", "jcmd | grep kafka", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution'" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/cruise-control-concepts-str
7.9.3. Related Books
7.9.3. Related Books Network Printing by Matthew Gast and Todd Radermacher; O'Reilly & Associates, Inc. -- Comprehensive information on using Linux as a print server in heterogeneous environments. The System Administrators Guide ; Red Hat, Inc. -- Includes a chapter on printer configuration.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-printers-addres-books
Chapter 6. ironic-inspector
Chapter 6. ironic-inspector The following chapter contains information about the configuration options in the ironic-inspector service. 6.1. inspector.conf This section contains options for the /etc/ironic-inspector/inspector.conf file. 6.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/ironic-inspector/inspector.conf file. . Configuration option = Default value Type Description api_max_limit = 1000 integer value Limit the number of elements an API list-call returns auth_strategy = keystone string value Authentication method used on the ironic-inspector API. Either "noauth" or "keystone" are currently valid options. "noauth" will disable all authentication. can_manage_boot = True boolean value Whether the current installation of ironic-inspector can manage PXE booting of nodes. If set to False, the API will reject introspection requests with manage_boot missing or set to True. clean_up_period = 60 integer value Amount of time in seconds, after which repeat clean up of timed out nodes and old nodes status information. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['sqlalchemy=WARNING', 'iso8601=WARNING', 'requests=WARNING', 'urllib3.connectionpool=WARNING', 'keystonemiddleware=WARNING', 'keystoneauth=WARNING', 'ironicclient=WARNING', 'amqp=WARNING', 'amqplib=WARNING'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. enable_mdns = False boolean value Whether to enable publishing the ironic-inspector API endpoint via multicast DNS. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. host = <based on operating system> string value Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address. However, the node name must be valid within an AMQP key, and if using ZeroMQ, a valid hostname, FQDN, or IP address. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. introspection_delay = 5 integer value Delay (in seconds) between two introspections. ipmi_address_fields = ['ilo_address', 'drac_host', 'drac_address', 'cimc_address'] list value Ironic driver_info fields that are equivalent to ipmi_address. listen_address = 0.0.0.0 string value IP to listen on. listen_port = 5050 port value Port to listen on. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. 
This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_concurrency = 1000 integer value The green thread pool size. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rootwrap_config = /etc/ironic-inspector/rootwrap.conf string value Path to the rootwrap configuration file to use for running commands as root standalone = True boolean value Whether to run ironic-inspector as a standalone service. It's EXPERIMENTAL to set to False. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. timeout = 3600 integer value Timeout after which introspection is considered failed, set to 0 to disable. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages. This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set.
use_eventlog = False boolean value Log output to Windows Event Log. use_ssl = False boolean value SSL Enabled/Disabled use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 6.1.2. capabilities The following table outlines the options available under the [capabilities] group in the /etc/ironic-inspector/inspector.conf file. Table 6.1. capabilities Configuration option = Default value Type Description boot_mode = False boolean value Whether to store the boot mode (BIOS or UEFI). cpu_flags = {'aes': 'cpu_aes', 'pdpe1gb': 'cpu_hugepages_1g', 'pse': 'cpu_hugepages', 'smx': 'cpu_txt', 'svm': 'cpu_vt', 'vmx': 'cpu_vt'} dict value Mapping between a CPU flag and a capability to set if this flag is present. 6.1.3. coordination The following table outlines the options available under the [coordination] group in the /etc/ironic-inspector/inspector.conf file. Table 6.2. coordination Configuration option = Default value Type Description backend_url = memcached://localhost:11211 string value The backend URL to use for distributed coordination. EXPERIMENTAL. 6.1.4. cors The following table outlines the options available under the [cors] group in the /etc/ironic-inspector/inspector.conf file. Table 6.3. cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-OpenStack-Ironic-Inspector-API-Minimum-Version', 'X-OpenStack-Ironic-Inspector-API-Maximum-Version', 'X-OpenStack-Ironic-Inspector-API-Version'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'POST', 'PUT', 'HEAD', 'PATCH', 'DELETE', 'OPTIONS'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = [] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 6.1.5. database The following table outlines the options available under the [database] group in the /etc/ironic-inspector/inspector.conf file. Table 6.4. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. 
db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode= pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection lost. 6.1.6. discovery The following table outlines the options available under the [discovery] group in the /etc/ironic-inspector/inspector.conf file. Table 6.5. discovery Configuration option = Default value Type Description enabled_bmc_address_version = ['4', '6'] list value IP version of BMC address that will be used when enrolling a new node in Ironic. Defaults to "4,6". Could be "4" (use v4 address only), "4,6" (v4 address have higher priority and if both addresses found v6 version is ignored), "6,4" (v6 is desired but fall back to v4 address for BMCs having v4 address, opposite to "4,6"), "6" (use v6 address only and ignore v4 version). enroll_node_driver = fake-hardware string value The name of the Ironic driver used by the enroll hook when creating a new node in Ironic. 6.1.7. dnsmasq_pxe_filter The following table outlines the options available under the [dnsmasq_pxe_filter] group in the /etc/ironic-inspector/inspector.conf file. Table 6.6. dnsmasq_pxe_filter Configuration option = Default value Type Description dhcp_hostsdir = /var/lib/ironic-inspector/dhcp-hostsdir string value The MAC address cache directory, exposed to dnsmasq.This directory is expected to be in exclusive control of the driver. `dnsmasq_start_command = ` string value A (shell) command line to start the dnsmasq service upon filter initialization. Default: don't start. `dnsmasq_stop_command = ` string value A (shell) command line to stop the dnsmasq service upon inspector (error) exit. Default: don't stop. purge_dhcp_hostsdir = True boolean value Purge the hostsdir upon driver initialization. Setting to false should only be performed when the deployment of inspector is such that there are multiple processes executing inside of the same host and namespace. 
In this case, the Operator is responsible for setting up a custom cleaning facility. 6.1.8. iptables The following table outlines the options available under the [iptables] group in the /etc/ironic-inspector/inspector.conf file. Table 6.7. iptables Configuration option = Default value Type Description dnsmasq_interface = br-ctlplane string value Interface on which dnsmasq listens; the default is for VMs. ethoib_interfaces = [] list value List of Ethernet Over InfiniBand interfaces on the Inspector host which are used for physical access to the DHCP network. Multiple interfaces would be attached to a bond or bridge specified in dnsmasq_interface. The MACs of the InfiniBand nodes which are not in desired state are going to be blacklisted based on the list of neighbor MACs on these interfaces. firewall_chain = ironic-inspector string value iptables chain name to use. ip_version = 4 string value The IP version that will be used for iptables filter. Defaults to 4. 6.1.9. ironic The following table outlines the options available under the [ironic] group in the /etc/ironic-inspector/inspector.conf file. Table 6.8. ironic Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. connect-retries = None integer value The maximum number of retries that should be attempted for connection errors. connect-retry-delay = None floating point value Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint-override = None string value Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version , min-version , and/or max-version options. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file max-version = None string value The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version. max_retries = 30 integer value Maximum number of retries in case of conflict error (HTTP 409). min-version = None string value The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest".
password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region-name = None string value The default region_name for endpoint URL discovery. retry_interval = 2 integer value Interval between retries in case of conflict error (HTTP 409). service-name = None string value The default service_name for endpoint URL discovery. service-type = baremetal string value The default service_type for endpoint URL discovery. split-loggers = False boolean value Log requests to multiple loggers. status-code-retries = None integer value The maximum number of retries that should be attempted for retriable HTTP status codes. status-code-retry-delay = None floating point value Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value Trust ID user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username valid-interfaces = ['internal', 'public'] list value List of interfaces, in order of preference, for endpoint URL. version = None string value Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version 6.1.10. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/ironic-inspector/inspector.conf file. Table 6.9. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. 
enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = admin string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" or "admin"(default). keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = False boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only work under python 2.x. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. 
Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 6.1.11. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/ironic-inspector/inspector.conf file. Table 6.10. oslo_policy Configuration option = Default value Type Description enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.json string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 6.1.12. pci_devices The following table outlines the options available under the [pci_devices] group in the /etc/ironic-inspector/inspector.conf file. Table 6.11. pci_devices Configuration option = Default value Type Description alias = [] multi valued An alias for PCI device identified by vendor_id and product_id fields. Format: {"vendor_id": "1234", "product_id": "5678", "name": "pci_dev1"} 6.1.13. processing The following table outlines the options available under the [processing] group in the /etc/ironic-inspector/inspector.conf file. Table 6.12. 
processing Configuration option = Default value Type Description add_ports = pxe string value Which MAC addresses to add as ports during introspection. Possible values: all (all MAC addresses), active (MAC addresses of NIC with IP addresses), pxe (only MAC address of NIC node PXE booted from, falls back to "active" if PXE MAC is not supplied by the ramdisk). always_store_ramdisk_logs = False boolean value Whether to store ramdisk logs even if it did not return an error message (dependent upon "ramdisk_logs_dir" option being set). default_processing_hooks = ramdisk_error,root_disk_selection,scheduler,validate_interfaces,capabilities,pci_devices string value Comma-separated list of default hooks for processing pipeline. Hook scheduler updates the node with the minimum properties required by the Nova scheduler. Hook validate_interfaces ensures that valid NIC data was provided by the ramdisk. Do not exclude these two unless you really know what you're doing. disk_partitioning_spacing = True boolean value Whether to leave 1 GiB of disk size untouched for partitioning. Only has effect when used with the IPA as a ramdisk, for older ramdisk local_gb is calculated on the ramdisk side. keep_ports = all string value Which ports (already present on a node) to keep after introspection. Possible values: all (do not delete anything), present (keep ports which MACs were present in introspection data), added (keep only MACs that we added during introspection). node_not_found_hook = None string value The name of the hook to run when inspector receives inspection information from a node it isn't already aware of. This hook is ignored by default. overwrite_existing = True boolean value Whether to overwrite existing values in node database. Disable this option to make introspection a non-destructive operation. permit_active_introspection = False boolean value Whether to process nodes that are in running states. power_off = True boolean value Whether to power off a node after introspection. Nodes in active or rescue states which submit introspection data will be left on if the feature is enabled via the permit_active_introspection configuration option. processing_hooks = $default_processing_hooks string value Comma-separated list of enabled hooks for processing pipeline. The default for this is $default_processing_hooks, hooks can be added before or after the defaults like this: "prehook,$default_processing_hooks,posthook". ramdisk_logs_dir = None string value If set, logs from ramdisk will be stored in this directory. ramdisk_logs_filename_format = {uuid}_{dt:%Y%m%d-%H%M%S.%f}.tar.gz string value File name template for storing ramdisk logs. The following replacements can be used: {uuid} - node UUID or "unknown", {bmc} - node BMC address or "unknown", {dt} - current UTC date and time, {mac} - PXE booting MAC or "unknown". store_data = none string value The storage backend for storing introspection data. Possible values are: none , database and swift . If set to none , introspection data will not be stored. 6.1.14. pxe_filter The following table outlines the options available under the [pxe_filter] group in the /etc/ironic-inspector/inspector.conf file. Table 6.13. pxe_filter Configuration option = Default value Type Description driver = iptables string value PXE boot filter driver to use, possible filters are: "iptables", "dnsmasq" and "noop". Set "noop" to disable the firewall filtering. sync_period = 15 integer value Amount of time in seconds, after which repeat periodic update of the filter. 6.1.15.
service_catalog
The following table outlines the options available under the [service_catalog] group in the /etc/ironic-inspector/inspector.conf file.
Table 6.14. service_catalog (each entry reads: configuration option = default value (type): description)
auth-url = None (string value): Authentication URL
auth_type = None (string value): Authentication type to load
cafile = None (string value): PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None (string value): PEM encoded client certificate cert file
collect-timing = False (boolean value): Collect per-API call timing information.
connect-retries = None (integer value): The maximum number of retries that should be attempted for connection errors.
connect-retry-delay = None (floating point value): Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
default-domain-id = None (string value): Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default-domain-name = None (string value): Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
domain-id = None (string value): Domain ID to scope to
domain-name = None (string value): Domain name to scope to
endpoint-override = None (string value): Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
insecure = False (boolean value): Verify HTTPS connections.
keyfile = None (string value): PEM encoded client certificate key file
max-version = None (string value): The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version.
min-version = None (string value): The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest".
password = None (string value): User's password
project-domain-id = None (string value): Domain ID containing project
project-domain-name = None (string value): Domain name containing project
project-id = None (string value): Project ID to scope to
project-name = None (string value): Project name to scope to
region-name = None (string value): The default region_name for endpoint URL discovery.
service-name = None (string value): The default service_name for endpoint URL discovery.
service-type = baremetal-introspection (string value): The default service_type for endpoint URL discovery.
split-loggers = False (boolean value): Log requests to multiple loggers.
status-code-retries = None (integer value): The maximum number of retries that should be attempted for retriable HTTP status codes.
status-code-retry-delay = None (floating point value): Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
system-scope = None (string value): Scope for system operations
tenant-id = None (string value): Tenant ID
tenant-name = None (string value): Tenant Name
timeout = None (integer value): Timeout value for http requests
trust-id = None (string value): Trust ID
user-domain-id = None (string value): User's domain id
user-domain-name = None (string value): User's domain name
user-id = None (string value): User id
username = None (string value): Username
valid-interfaces = ['internal', 'public'] (list value): List of interfaces, in order of preference, for endpoint URL.
version = None (string value): Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version
6.1.16. swift
The following table outlines the options available under the [swift] group in the /etc/ironic-inspector/inspector.conf file.
Table 6.15. swift (each entry reads: configuration option = default value (type): description)
auth-url = None (string value): Authentication URL
auth_type = None (string value): Authentication type to load
cafile = None (string value): PEM encoded Certificate Authority to use when verifying HTTPS connections.
certfile = None (string value): PEM encoded client certificate cert file
collect-timing = False (boolean value): Collect per-API call timing information.
connect-retries = None (integer value): The maximum number of retries that should be attempted for connection errors.
connect-retry-delay = None (floating point value): Delay (in seconds) between two retries for connection errors. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
container = ironic-inspector (string value): Default Swift container to use when creating objects.
default-domain-id = None (string value): Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
default-domain-name = None (string value): Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
delete_after = 0 (integer value): Number of seconds that the Swift object will last before being deleted. (Set to 0 to never delete the object.)
domain-id = None (string value): Domain ID to scope to
domain-name = None (string value): Domain name to scope to
endpoint-override = None (string value): Always use this endpoint URL for requests for this client. NOTE: The unversioned endpoint should be specified here; to request a particular API version, use the version, min-version, and/or max-version options.
insecure = False (boolean value): Verify HTTPS connections.
keyfile = None (string value): PEM encoded client certificate key file
max-version = None (string value): The maximum major version of a given API, intended to be used as the upper bound of a range with min_version. Mutually exclusive with version.
max_retries = None (integer value): This option is deprecated and has no effect.
min-version = None (string value): The minimum major version of a given API, intended to be used as the lower bound of a range with max_version. Mutually exclusive with version. If min_version is given with no max_version it is as if max version is "latest".
password = None (string value): User's password
project-domain-id = None (string value): Domain ID containing project
project-domain-name = None (string value): Domain name containing project
project-id = None (string value): Project ID to scope to
project-name = None (string value): Project name to scope to
region-name = None (string value): The default region_name for endpoint URL discovery.
service-name = None (string value): The default service_name for endpoint URL discovery.
service-type = object-store (string value): The default service_type for endpoint URL discovery.
split-loggers = False (boolean value): Log requests to multiple loggers.
status-code-retries = None (integer value): The maximum number of retries that should be attempted for retriable HTTP status codes.
status-code-retry-delay = None (floating point value): Delay (in seconds) between two retries for retriable status codes. If not set, exponential retry starting with 0.5 seconds up to a maximum of 60 seconds is used.
system-scope = None (string value): Scope for system operations
tenant-id = None (string value): Tenant ID
tenant-name = None (string value): Tenant Name
timeout = None (integer value): Timeout value for http requests
trust-id = None (string value): Trust ID
user-domain-id = None (string value): User's domain id
user-domain-name = None (string value): User's domain name
user-id = None (string value): User id
username = None (string value): Username
valid-interfaces = ['internal', 'public'] (list value): List of interfaces, in order of preference, for endpoint URL.
version = None (string value): Minimum Major API version within a given Major API version for endpoint URL discovery. Mutually exclusive with min_version and max_version
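To make the reference concrete, the following is a minimal, hypothetical [swift] snippet for /etc/ironic-inspector/inspector.conf that combines several of the options documented above. The endpoint URL, account names, and password are placeholders, and option names are written with underscores, as is conventional in oslo.config files, even where the table renders some of them with hyphens:

[swift]
auth_type = password
auth_url = http://192.0.2.10:5000/v3
username = ironic-inspector
password = PASSWORD
project_name = service
user_domain_name = Default
project_domain_name = Default
container = ironic-inspector
delete_after = 0
valid_interfaces = internal,public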
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/configuration_reference/ironic_inspector
1.4. SELinux States and Modes
1.4. SELinux States and Modes SELinux can run in one of three modes: disabled, permissive, or enforcing. Disabled mode is strongly discouraged; not only does the system avoid enforcing the SELinux policy, it also avoids labeling any persistent objects such as files, making it difficult to enable SELinux in the future. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not recommended for production systems, permissive mode can be helpful for SELinux policy development. Enforcing mode is the default, and recommended, mode of operation; in enforcing mode SELinux operates normally, enforcing the loaded security policy on the entire system. Use the setenforce utility to change between enforcing and permissive mode. Changes made with setenforce do not persist across reboots. To change to enforcing mode, enter the setenforce 1 command as the Linux root user. To change to permissive mode, enter the setenforce 0 command. Use the getenforce utility to view the current SELinux mode: In Red Hat Enterprise Linux, you can set individual domains to permissive mode while the system runs in enforcing mode. For example, to make the httpd_t domain permissive: See Section 11.3.4, "Permissive Domains" for more information. Note Persistent state and mode changes are covered in Section 4.4, "Permanent Changes in SELinux States and Modes".
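For illustration, here is a root shell session combining these utilities. The semanage permissive -l and -d invocations, which list and then remove the permissive domain, go slightly beyond the commands quoted in this section, and the output is abridged:

~]# getenforce
Enforcing
~]# setenforce 0
~]# getenforce
Permissive
~]# setenforce 1
~]# semanage permissive -a httpd_t
~]# semanage permissive -l
Customized Permissive Types
httpd_t
~]# semanage permissive -d httpd_t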
[ "~]# getenforce Enforcing", "~]# setenforce 0 ~]# getenforce Permissive", "~]# setenforce 1 ~]# getenforce Enforcing", "~]# semanage permissive -a httpd_t" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-introduction-selinux_modes
Chapter 6. Backing up IdM servers using Ansible playbooks
Chapter 6. Backing up IdM servers using Ansible playbooks Using the ipabackup Ansible role, you can automate backing up an IdM server and transferring backup files between servers and your Ansible controller. 6.1. Preparing your Ansible control node for managing IdM As a system administrator managing Identity Management (IdM), when working with Red Hat Ansible Engine, it is good practice to do the following: Create a subdirectory dedicated to Ansible playbooks in your home directory, for example ~/MyPlaybooks . Copy and adapt sample Ansible playbooks from the /usr/share/doc/ansible-freeipa/* and /usr/share/doc/rhel-system-roles/* directories and subdirectories into your ~/MyPlaybooks directory. Include your inventory file in your ~/MyPlaybooks directory. By following this practice, you can find all your playbooks in one place and you can run your playbooks without invoking root privileges. Note You only need root privileges on the managed nodes to execute the ipaserver , ipareplica , ipaclient , ipabackup , ipasmartcard_server and ipasmartcard_client ansible-freeipa roles. These roles require privileged access to directories and the dnf software package manager. Follow this procedure to create the ~/MyPlaybooks directory and configure it so that you can use it to store and run Ansible playbooks. Prerequisites You have installed an IdM server on your managed nodes, server.idm.example.com and replica.idm.example.com . You have configured DNS and networking so you can log in to the managed nodes, server.idm.example.com and replica.idm.example.com , directly from the control node. You know the IdM admin password. Procedure Create a directory for your Ansible configuration and playbooks in your home directory: Change into the ~/MyPlaybooks/ directory: Create the ~/MyPlaybooks/ansible.cfg file with the following content: Create the ~/MyPlaybooks/inventory file with the following content: This configuration defines the ipaserver and ipareplicas host groups, as well as the ipacluster host group, which contains all hosts from the ipaserver and ipareplicas groups, and a separate ipaclients host group for client hosts. Optional: Create an SSH public and private key. To simplify access in your test environment, do not set a password on the private key: Copy the SSH public key to the IdM admin account on each managed node: You must enter the IdM admin password when you enter these commands. Additional resources Installing an Identity Management server using an Ansible playbook How to build your inventory 6.2. Using Ansible to create a backup of an IdM server You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and store it on the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server.yml Ansible playbook file for editing. 
Adapt the file by setting the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group: Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Log into the IdM server that you have backed up. Verify that the backup is in the /var/lib/ipa/backup directory. Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.3. Using Ansible to create a backup of an IdM server on your Ansible controller You can use the ipabackup role in an Ansible playbook to create a backup of an IdM server and automatically transfer it to your Ansible controller. Your backup file name begins with the host name of the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the backup-server-to-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the backup-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Optional: To maintain a copy of the backup on the IdM server, uncomment the following line: By default, backups are stored in the present working directory of the Ansible controller. To specify the backup directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Verification Verify that the backup is in the /home/user/ipabackups directory of your Ansible controller: Additional resources For more sample Ansible playbooks that use the ipabackup role, see: The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.4. Using Ansible to copy a backup of an IdM server to your Ansible controller You can use an Ansible playbook to copy a backup of an IdM server from the IdM server to your Ansible controller. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. 
Procedure To store the backups, create a subdirectory in your home directory on the Ansible controller. Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-backup-from-my-server-to-my-controller.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your IdM server to copy to your Ansible controller. By default, backups are stored in the present working directory of the Ansible controller. To specify the directory you created in Step 1, add the ipabackup_controller_path variable and set it to the /home/user/ipabackups directory. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To copy all IdM backups to your controller, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the copy-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Verification Verify your backup is in the /home/user/ipabackups directory on your Ansible controller: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.5. Using Ansible to copy a backup of an IdM server from your Ansible controller to the IdM server You can use an Ansible playbook to copy a backup of an IdM server from your Ansible controller to the IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the copy-backup-from-controller.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the copy-backup-from-my-controller-to-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup on your Ansible controller to copy to the IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory. 6.6. Using Ansible to remove a backup from an IdM server You can use an Ansible playbook to remove a backup from an IdM server. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.15 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. 
The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/MyPlaybooks/ directory: Make a copy of the remove-backup-from-server.yml file located in the /usr/share/doc/ansible-freeipa/playbooks directory: Open the remove-backup-from-my-server.yml file for editing. Adapt the file by setting the following variables: Set the hosts variable to a host group from your inventory file. In this example, set it to the ipaserver host group. Set the ipabackup_name variable to the name of the ipabackup to remove from your IdM server. Save the file. Run the Ansible playbook, specifying the inventory file and the playbook file: Note To remove all IdM backups from the IdM server, set the ipabackup_name variable in the Ansible playbook to all : For an example, see the remove-all-backups-from-server.yml Ansible playbook in the /usr/share/doc/ansible-freeipa/playbooks directory. Additional resources The README.md file in the /usr/share/doc/ansible-freeipa/roles/ipabackup directory. The /usr/share/doc/ansible-freeipa/playbooks/ directory.
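As a consolidated illustration of the pattern used throughout this chapter, the following minimal playbook backs up the server and fetches the result to the controller while keeping a copy on the server. The ipaserver host group, the /home/user/ipabackups directory, and the password_file vault file name are the same assumptions used in the sections above:

---
- name: Playbook to back up an IPA server to the controller
  hosts: ipaserver
  become: true
  vars:
    ipabackup_to_controller: true
    ipabackup_keep_on_server: true
    ipabackup_controller_path: /home/user/ipabackups
  roles:
    - role: ipabackup
      state: present

You would then run it the same way as the other playbooks in this chapter: ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server-to-my-controller.yml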
[ "mkdir ~/MyPlaybooks/", "cd ~/MyPlaybooks", "[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True", "[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword", "ssh-keygen", "ssh-copy-id [email protected] ssh-copy-id [email protected]", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/backup-server.yml backup-my-server.yml", "--- - name: Playbook to backup IPA server hosts: ipaserver become: true roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server.yml", "ls /var/lib/ipa/backup/ ipa-full-2021-04-30-13-12-00", "mkdir ~/ipabackups", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/backup-server-to-controller.yml backup-my-server-to-my-controller.yml", "ipabackup_keep_on_server: true", "--- - name: Playbook to backup IPA server to controller hosts: ipaserver become: true vars: ipabackup_to_controller: true # ipabackup_keep_on_server: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory backup-my-server-to-my-controller.yml", "[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00", "mkdir ~/ipabackups", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-server.yml copy-backup-from-my-server-to-my-controller.yml", "--- - name: Playbook to copy backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 ipabackup_to_controller: true ipabackup_controller_path: /home/user/ipabackups roles: - role: ipabackup state: present", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-server-to-my-controller.yml", "vars: ipabackup_name: all ipabackup_to_controller: true", "[user@controller ~]USD ls /home/user/ipabackups server.idm.example.com_ipa-full-2021-04-30-13-12-00", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/copy-backup-from-controller.yml copy-backup-from-my-controller-to-my-server.yml", "--- - name: Playbook to copy a backup from controller to the IPA server hosts: ipaserver become: true vars: ipabackup_name: server.idm.example.com_ipa-full-2021-04-30-13-12-00 ipabackup_from_controller: true roles: - role: ipabackup state: copied", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory copy-backup-from-my-controller-to-my-server.yml", "cd ~/MyPlaybooks/", "cp /usr/share/doc/ansible-freeipa/playbooks/remove-backup-from-server.yml remove-backup-from-my-server.yml", "--- - name: Playbook to remove backup from IPA server hosts: ipaserver become: true vars: ipabackup_name: ipa-full-2021-04-30-13-12-00 roles: - role: ipabackup state: absent", "ansible-playbook --vault-password-file=password_file -v -i ~/MyPlaybooks/inventory remove-backup-from-my-server.yml", "vars: ipabackup_name: all" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/preparing_for_disaster_recovery_with_identity_management/assembly_backing-up-idm-servers-using-ansible-playbooks_preparing-for-disaster-recovery
Chapter 6. Service Provider Interfaces (SPI)
Chapter 6. Service Provider Interfaces (SPI) Red Hat build of Keycloak is designed to cover most use-cases without requiring custom code, but we also want it to be customizable. To achieve this, Red Hat build of Keycloak has a number of Service Provider Interfaces (SPI) for which you can implement your own providers. 6.1. Implementing an SPI To implement an SPI you need to implement its ProviderFactory and Provider interfaces. You also need to create a service configuration file. For example, to implement the Theme Selector SPI you need to implement ThemeSelectorProviderFactory and ThemeSelectorProvider and also provide the file META-INF/services/org.keycloak.theme.ThemeSelectorProviderFactory . Example ThemeSelectorProviderFactory: package org.acme.provider; import ... public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory { @Override public ThemeSelectorProvider create(KeycloakSession session) { return new MyThemeSelectorProvider(session); } @Override public void init(Config.Scope config) { } @Override public void postInit(KeycloakSessionFactory factory) { } @Override public void close() { } @Override public String getId() { return "myThemeSelector"; } } It is recommended that your provider factory implementation return a unique ID from its getId() method. However, there can be some exceptions to this rule, as mentioned below in the Override built-in providers section. Note Red Hat build of Keycloak creates a single instance of each provider factory, which makes it possible to store state for multiple requests. Provider instances are created by calling create on the factory for each request, so these should be lightweight objects. Example ThemeSelectorProvider: package org.acme.provider; import ... public class MyThemeSelectorProvider implements ThemeSelectorProvider { public MyThemeSelectorProvider(KeycloakSession session) { } @Override public String getThemeName(Theme.Type type) { return "my-theme"; } @Override public void close() { } } Example service configuration file ( META-INF/services/org.keycloak.theme.ThemeSelectorProviderFactory ): To configure your provider, see the Configuring Providers chapter. For example, to configure a provider you can set options as follows: bin/kc.[sh|bat] --spi-theme-selector-my-theme-selector-enabled=true --spi-theme-selector-my-theme-selector-theme=my-theme Then you can retrieve the config in the ProviderFactory init method: public void init(Config.Scope config) { String themeName = config.get("theme"); } Your provider can also look up other providers if needed. For example: public class MyThemeSelectorProvider implements ThemeSelectorProvider { private KeycloakSession session; public MyThemeSelectorProvider(KeycloakSession session) { this.session = session; } @Override public String getThemeName(Theme.Type type) { return session.getContext().getRealm().getLoginTheme(); } } 6.1.1. Override built-in providers As mentioned above, it is recommended that your ProviderFactory implementations use a unique ID. At the same time, it can be useful to override one of the Red Hat build of Keycloak built-in providers. The recommended way to do this is still a ProviderFactory implementation with a unique ID, with the default provider then set as described in the Configuring Providers chapter. However, this may not always be possible. 
For instance, when you need some customizations to the default OpenID Connect protocol behaviour and you want to override the default Red Hat build of Keycloak implementation of OIDCLoginProtocolFactory , you need to preserve the same providerId, because the admin console, the OIDC protocol well-known endpoint, and various other things rely on the ID of the protocol factory being openid-connect . For this case, it is highly recommended to implement the order() method in your custom implementation and make sure that it has a higher order than the built-in implementation. public class CustomOIDCLoginProtocolFactory extends OIDCLoginProtocolFactory { // Some customizations here @Override public int order() { return 1; } } In case of multiple implementations with the same provider ID, only the one with the highest order will be used by the Red Hat build of Keycloak runtime. 6.1.2. Show info from your SPI implementation in the Admin Console Sometimes it is useful to show additional info about your Provider to a Red Hat build of Keycloak administrator. You can show provider build time information (for example, the version of the custom provider currently installed), the current configuration of the provider (for example, the URL of the remote system your provider talks to) or some operational info (such as the average response time of the remote system your provider talks to). The Red Hat build of Keycloak Admin Console provides the Server Info page to show this kind of information. To show info from your provider, it is enough to implement the org.keycloak.provider.ServerInfoAwareProviderFactory interface in your ProviderFactory . Example implementation for MyThemeSelectorProviderFactory from the previous example: package org.acme.provider; import ... public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory, ServerInfoAwareProviderFactory { ... @Override public Map<String, String> getOperationalInfo() { Map<String, String> ret = new LinkedHashMap<>(); ret.put("theme-name", "my-theme"); return ret; } } 6.2. Use available providers In your provider implementation, you can use other providers available in Red Hat build of Keycloak. The existing providers can typically be retrieved with the usage of the KeycloakSession , which is available to your provider as described in the section Implementing an SPI . Red Hat build of Keycloak has two provider types: Single-implementation provider types - There can be only a single active implementation of the particular provider type in the Red Hat build of Keycloak runtime. For example, HostnameProvider specifies the hostname to be used by Red Hat build of Keycloak, and that is shared for the whole Red Hat build of Keycloak server. Hence there can be only a single implementation of this provider active for the Red Hat build of Keycloak server. If there are multiple provider implementations available to the server runtime, one of them needs to be specified as the default one, for example: bin/kc.[sh|bat] build --spi-hostname-provider=default The value default used as the value of default-provider must match the ID returned by the ProviderFactory.getId() of the particular provider factory implementation. In the code, you can obtain the provider with keycloakSession.getProvider(HostnameProvider.class) . Multiple implementation provider types - Those are provider types that allow multiple implementations to be available and working together in the Red Hat build of Keycloak runtime. 
For example, the EventListener provider type allows multiple implementations to be available and registered, which means that a particular event can be sent to all the listeners (jboss-logging, sysout, and so on). In the code, you can obtain a specific instance of the provider, for example session.getProvider(EventListener.class, "jboss-logging") . You need to specify the provider_id of the provider as the second argument, as there can be multiple instances of this provider type as described above. The provider ID must match the ID returned by the ProviderFactory.getId() of the particular provider factory implementation. Some provider types can be retrieved with the usage of ComponentModel as the second argument and some (for example Authenticator ) even need to be retrieved with the usage of KeycloakSessionFactory . It is not recommended to implement your own providers this way as it may be deprecated in the future. 6.3. Registering provider implementations Providers are registered with the server by simply copying them to the providers directory. If your provider needs additional dependencies not already provided by Keycloak, copy these to the providers directory. After registering new providers or dependencies, Keycloak needs to be rebuilt with the kc.[sh|bat] build command. 6.3.1. Disabling a provider You can disable a provider by setting the enabled attribute for the provider to false. For example, to disable the Infinispan user cache provider use: bin/kc.[sh|bat] build --spi-user-cache-infinispan-enabled=false 6.4. JavaScript providers Red Hat build of Keycloak has the ability to execute scripts at runtime in order to allow administrators to customize specific functionalities: Authenticator JavaScript Policy OpenID Connect Protocol Mapper SAML Protocol Mapper 6.4.1. Authenticator Authentication scripts must provide at least one of the following functions: authenticate(..) , which is called from Authenticator#authenticate(AuthenticationFlowContext) action(..) , which is called from Authenticator#action(AuthenticationFlowContext) A custom Authenticator should at least provide the authenticate(..) function. You can use the javax.script.Bindings script within the code: script is the ScriptModel to access script metadata; realm is the RealmModel ; user is the current UserModel (note that user is available when your script authenticator is configured in the authentication flow in a way that is triggered after another authenticator succeeded in establishing user identity and set the user into the authentication session); session is the active KeycloakSession ; authenticationSession is the current AuthenticationSessionModel ; httpRequest is the current org.jboss.resteasy.spi.HttpRequest ; LOG is a org.jboss.logging.Logger scoped to ScriptBasedAuthenticator . Note You can extract additional context information from the context argument passed to the authenticate(context) and action(context) functions. AuthenticationFlowError = Java.type("org.keycloak.authentication.AuthenticationFlowError"); function authenticate(context) { LOG.info(script.name + " --> trace auth for: " + user.username); if ( user.username === "tester" && user.getAttribute("someAttribute") && user.getAttribute("someAttribute").contains("someValue")) { context.failure(AuthenticationFlowError.INVALID_USER); return; } context.success(); } 6.4.1.1. Where to add script authenticator A possible use of a script authenticator is to perform some checks at the end of the authentication. 
Note that if you want your script authenticator to be always triggered (even, for instance, during SSO re-authentication with the identity cookie), you may need to add it as REQUIRED at the end of the authentication flow and encapsulate the existing authenticators into a separate REQUIRED authentication subflow. This is necessary because the REQUIRED and ALTERNATIVE executions should not be at the same level. For example, the authentication flow configuration should appear as follows: 6.4.2. Create a JAR with the scripts to deploy Note JAR files are regular ZIP files with a .jar extension. In order to make your scripts available to Red Hat build of Keycloak, you need to deploy them to the server. For that, you should create a JAR file with the following structure: The META-INF/keycloak-scripts.json is a file descriptor that provides metadata information about the scripts you want to deploy. It is a JSON file with the following structure: { "authenticators": [ { "name": "My Authenticator", "fileName": "my-script-authenticator.js", "description": "My Authenticator from a JS file" } ], "policies": [ { "name": "My Policy", "fileName": "my-script-policy.js", "description": "My Policy from a JS file" } ], "mappers": [ { "name": "My Mapper", "fileName": "my-script-mapper.js", "description": "My Mapper from a JS file" } ], "saml-mappers": [ { "name": "My Mapper", "fileName": "my-script-mapper.js", "description": "My Mapper from a JS file" } ] } This file should reference the different types of script providers that you want to deploy: authenticators For OpenID Connect Script Authenticators. You can have one or multiple authenticators in the same JAR file policies For JavaScript Policies when using Red Hat build of Keycloak Authorization Services. You can have one or multiple policies in the same JAR file mappers For OpenID Connect Script Protocol Mappers. You can have one or multiple mappers in the same JAR file saml-mappers For SAML Script Protocol Mappers. You can have one or multiple mappers in the same JAR file For each script file in your JAR file, you need a corresponding entry in META-INF/keycloak-scripts.json that maps your script files to a specific provider type. For that, you should provide the following properties for each entry: name A friendly name that will be used to show the scripts through the Red Hat build of Keycloak Administration Console. If not provided, the name of the script file will be used instead description An optional text that better describes the intent of the script file fileName The name of the script file. This property is mandatory and should map to a file within the JAR. 6.4.3. Deploy the script JAR Once you have a JAR file with a descriptor and the scripts you want to deploy, you just need to copy the JAR to the Red Hat build of Keycloak providers/ directory, then run bin/kc.[sh|bat] build . 6.5. Available SPIs If you want to see a list of all available SPIs at runtime, you can check the Server Info page in the Admin Console as described in the Admin Console section.
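As a brief illustration of the packaging and deployment flow described in Sections 6.4.2 and 6.4.3, the following shell sketch assumes a Linux host, an arbitrary JAR name ( my-scripts.jar ), and an installation rooted at /path/to/keycloak; adjust the paths and file list to your environment:

# Package the descriptor and script files into a JAR (a JAR is just a ZIP with a .jar extension)
jar cf my-scripts.jar META-INF/keycloak-scripts.json my-script-authenticator.js my-script-policy.js my-script-mapper.js
# Copy it to the server's providers directory and rebuild
cp my-scripts.jar /path/to/keycloak/providers/
/path/to/keycloak/bin/kc.sh build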
[ "package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory { @Override public ThemeSelectorProvider create(KeycloakSession session) { return new MyThemeSelectorProvider(session); } @Override public void init(Config.Scope config) { } @Override public void postInit(KeycloakSessionFactory factory) { } @Override public void close() { } @Override public String getId() { return \"myThemeSelector\"; } }", "package org.acme.provider; import public class MyThemeSelectorProvider implements ThemeSelectorProvider { public MyThemeSelectorProvider(KeycloakSession session) { } @Override public String getThemeName(Theme.Type type) { return \"my-theme\"; } @Override public void close() { } }", "org.acme.provider.MyThemeSelectorProviderFactory", "bin/kc.[sh|bat] --spi-theme-selector-my-theme-selector-enabled=true --spi-theme-selector-my-theme-selector-theme=my-theme", "public void init(Config.Scope config) { String themeName = config.get(\"theme\"); }", "public class MyThemeSelectorProvider implements ThemeSelectorProvider { private KeycloakSession session; public MyThemeSelectorProvider(KeycloakSession session) { this.session = session; } @Override public String getThemeName(Theme.Type type) { return session.getContext().getRealm().getLoginTheme(); } }", "public class CustomOIDCLoginProtocolFactory extends OIDCLoginProtocolFactory { // Some customizations here @Override public int order() { return 1; } }", "package org.acme.provider; import public class MyThemeSelectorProviderFactory implements ThemeSelectorProviderFactory, ServerInfoAwareProviderFactory { @Override public Map<String, String> getOperationalInfo() { Map<String, String> ret = new LinkedHashMap<>(); ret.put(\"theme-name\", \"my-theme\"); return ret; } }", "bin/kc.[sh|bat] build --spi-hostname-provider=default", "bin/kc.[sh|bat] build --spi-user-cache-infinispan-enabled=false", "AuthenticationFlowError = Java.type(\"org.keycloak.authentication.AuthenticationFlowError\"); function authenticate(context) { LOG.info(script.name + \" --> trace auth for: \" + user.username); if ( user.username === \"tester\" && user.getAttribute(\"someAttribute\") && user.getAttribute(\"someAttribute\").contains(\"someValue\")) { context.failure(AuthenticationFlowError.INVALID_USER); return; } context.success(); }", "- User-authentication-subflow REQUIRED -- Cookie ALTERNATIVE -- Identity-provider-redirect ALTERNATIVE - Your-Script-Authenticator REQUIRED", "META-INF/keycloak-scripts.json my-script-authenticator.js my-script-policy.js my-script-mapper.js", "{ \"authenticators\": [ { \"name\": \"My Authenticator\", \"fileName\": \"my-script-authenticator.js\", \"description\": \"My Authenticator from a JS file\" } ], \"policies\": [ { \"name\": \"My Policy\", \"fileName\": \"my-script-policy.js\", \"description\": \"My Policy from a JS file\" } ], \"mappers\": [ { \"name\": \"My Mapper\", \"fileName\": \"my-script-mapper.js\", \"description\": \"My Mapper from a JS file\" } ], \"saml-mappers\": [ { \"name\": \"My Mapper\", \"fileName\": \"my-script-mapper.js\", \"description\": \"My Mapper from a JS file\" } ] }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_developer_guide/providers
4.183. mod_revocator
4.183. mod_revocator 4.183.1. RHBA-2011:1769 - mod_revocator bug fix update An updated mod_revocator package that fixes multiple bugs is now available for Red Hat Enterprise Linux 6. The mod_revocator module retrieves and installs remote Certificate Revocation Lists (CRLs) into an Apache web server. Bug Fixes BZ# 748579 Previously, the code for the httpd daemon shutdown was incorrect and the mod_revocator module did not shut down the httpd daemon when a CRL (Certificate Revocation List) update failed on IA-32 architectures. With this update, the code has been fixed and httpd is now closed as expected when a CRL update fails. BZ# 748577 Previously, the code for httpd shutdown was incorrect and the mod_revocator module did not shut down the httpd daemon when expired CRLs were fetched. With this update, the code has been fixed and httpd is closed as expected in this scenario. BZ# 749696 Due to an incorrect initialization size of a static array, the httpd daemon with mod_revocator failed to start on 64-bit PowerPC architectures. With this update, the size of the array has been modified and the httpd daemon starts as expected under these circumstances. BZ# 746365 The httpd daemon with the mod_revocator module cannot be used as an HTTP client by default because the SELinux policy prevents such behavior. However, to acquire CRLs from a remote host, the httpd daemon needs to behave as an HTTP client to send HTTP messages to the host. If the behavior was not enabled, child processes of the httpd daemon terminated unexpectedly with segmentation faults when attempting to connect to a remote host. With this update, the underlying code has been changed and the segmentation faults no longer occur. Note To change the SELinux policy and enable httpd to request CRLs from a remote host, execute the "setsebool -P httpd_can_network_connect=1" command as root. All users of mod_revocator are advised to upgrade to this updated package, which fixes these bugs.
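For convenience, here is the SELinux boolean change described in the note, run as root; the getsebool call is added here only to confirm the new state and is not part of the original advisory:

~]# setsebool -P httpd_can_network_connect=1
~]# getsebool httpd_can_network_connect
httpd_can_network_connect --> on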
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/mod_revocator
Chapter 1. Overview of Red Hat OpenStack Application and VNF policies
Chapter 1. Overview of Red Hat OpenStack Application and VNF policies Use this guide to understand the prerequisites and environmental testing requirements that are necessary to successfully complete and obtain a Red Hat OpenStack Platform (RHOSP) application certification. This includes applications that depend on RHOSP APIs, applications that provide additional functionality in an RHOSP cloud, such as a Virtual Network Function (VNF), Network Functions Virtualization (NFV), or Management and Orchestration (MANO), and applications that run on top of an RHOSP environment. It also includes applications that do not implement infrastructure software (a plug-in or driver) for use with Red Hat OpenStack Platform in a supported customer environment. 1.1. Audience The Red Hat OpenStack Application certification policy guide is intended for Partners who want to certify an OpenStack application such as a Virtual Network Function (VNF), Network Functions Virtualization (NFV), Management and Orchestration (MANO), and others. 1.2. Creating value for our customers Red Hat OpenStack application certification creates value for customers as it ensures that the certified application can be used with RHOSP, in addition to making sure the underlying architecture is still supportable after the application is installed. The certification process, through a series of tests, validates that a certified solution meets the requirements of an enterprise cloud and is jointly supported by Red Hat and your organization.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_application_and_vnf_policy_guide/assembly-introduction-vnf-policy_rhosp-policy-guide
Chapter 16. Replacing storage devices
Chapter 16. Replacing storage devices 16.1. Replacing operational or failed storage devices on Red Hat OpenStack Platform installer-provisioned infrastructure Use this procedure to replace a storage device in OpenShift Data Foundation which is deployed on Red Hat OpenStack Platform. This procedure helps to create a new Persistent Volume Claim (PVC) on a new volume and remove the old object storage device (OSD). Procedure Identify the OSD that needs to be replaced and the OpenShift Container Platform node that has the OSD scheduled on it. Example output: In this example, rook-ceph-osd-0-6d77d6c7c6-m8xj6 needs to be replaced and compute-2 is the OpenShift Container Platform node on which the OSD is scheduled. Note If the OSD to be replaced is healthy, the status of the pod will be Running . Scale down the OSD deployment for the OSD to be replaced. where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-0 . Example output: Verify that the rook-ceph-osd pod is terminated. Example output: Note If the rook-ceph-osd pod is in a terminating state, use the force option to delete the pod. Example output: In case the persistent volume associated with the failed OSD fails, get the details of the failed persistent volumes and delete them using the following commands: Remove the old OSD from the cluster so that a new OSD can be added. Delete any old ocs-osd-removal jobs. Example output: Change to the openshift-storage project. Remove the old OSD from the cluster. You can add comma-separated OSD IDs in the command to remove more than one OSD. (For example, FAILED_OSD_IDS=0,1,2). The FORCE_OSD_REMOVAL value must be changed to "true" in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: If encryption was enabled at the time of install, remove the dm-crypt managed device-mapper mapping from the OSD devices that are removed from the respective OpenShift Data Foundation nodes. Get the PVC name(s) of the replaced OSD(s) from the logs of the ocs-osd-removal-job pod : For example: For each of the nodes identified in step #1, do the following: Create a debug pod and chroot to the host on the storage node. Find the relevant device name based on the PVC names identified in the previous step. Remove the mapped device. Note If the above command gets stuck due to insufficient privileges, run the following commands: Press CTRL+Z to exit the above command. Find the PID of the process which was stuck. Terminate the process using the kill command. Verify that the device name is removed. Delete the ocs-osd-removal job. Example output: Verification steps Verify that there is a new OSD running. Example output: Verify that there is a new PVC created which is in a Bound state. Example output: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. 
where <OSD-pod-name> is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). Run "lsblk" and check for the "crypt" keyword beside the ocs-deviceset name(s). Log in to the OpenShift Web Console and view the storage dashboard. Figure 16.1. OSD status in the OpenShift Container Platform storage dashboard after device replacement
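Optionally, one further way to confirm that data has rebalanced onto the new OSD is to query Ceph directly from the rook-ceph-tools pod. This is a suggestion beyond the documented procedure and assumes the toolbox has been enabled in the openshift-storage namespace:

oc -n openshift-storage rsh deploy/rook-ceph-tools
sh-4.4$ ceph status
sh-4.4$ ceph osd tree

A HEALTH_OK status and an up/in entry for the new OSD in the tree output indicate that recovery has finished.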
[ "oc get -n openshift-storage pods -l app=rook-ceph-osd -o wide", "rook-ceph-osd-0-6d77d6c7c6-m8xj6 0/1 CrashLoopBackOff 0 24h 10.129.0.16 compute-2 <none> <none> rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 24h 10.128.2.24 compute-0 <none> <none> rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 24h 10.130.0.18 compute-1 <none> <none>", "osd_id_to_remove=0 oc scale -n openshift-storage deployment rook-ceph-osd-USD{osd_id_to_remove} --replicas=0", "deployment.extensions/rook-ceph-osd-0 scaled", "oc get -n openshift-storage pods -l ceph-osd-id=USD{osd_id_to_remove}", "No resources found.", "oc delete pod rook-ceph-osd-0-6d77d6c7c6-m8xj6 --force --grace-period=0", "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely. pod \"rook-ceph-osd-0-6d77d6c7c6-m8xj6\" force deleted", "oc get pv oc delete pv <failed-pv-name>", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} FORCE_OSD_REMOVAL=false |oc create -n openshift-storage -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 |egrep -i 'pvc|deviceset'", "2021-05-12 14:31:34.666000 I | cephosd: removing the OSD PVC \"ocs-deviceset-xxxx-xxx-xxx-xxx\"", "oc debug node/<node name> chroot /host", "sh-4.4# dmsetup ls| grep <pvc name> ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt (253:0)", "cryptsetup luksClose --debug --verbose ocs-deviceset-xxx-xxx-xxx-xxx-block-dmcrypt", "ps -ef | grep crypt", "kill -9 <PID>", "dmsetup ls", "oc delete -n openshift-storage job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc get -n openshift-storage pods -l app=rook-ceph-osd", "rook-ceph-osd-0-5f7f4747d4-snshw 1/1 Running 0 4m47s rook-ceph-osd-1-85d99fb95f-2svc7 1/1 Running 0 1d20h rook-ceph-osd-2-6c66cdb977-jp542 1/1 Running 0 1d20h", "oc get -n openshift-storage pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-b44ebb5e-3c67-4000-998e-304752deb5a7 50Gi RWO ocs-storagecluster-ceph-rbd 6d ocs-deviceset-0-data-0-gwb5l Bound pvc-bea680cd-7278-463d-a4f6-3eb5d3d0defe 512Gi RWO standard 94s ocs-deviceset-1-data-0-w9pjm Bound pvc-01aded83-6ef1-42d1-a32e-6ca0964b96d4 512Gi RWO standard 6d ocs-deviceset-2-data-0-7bxcq Bound pvc-5d07cd6c-23cb-468c-89c1-72d07040e308 512Gi RWO standard 6d", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/_<OSD-pod-name>_", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/<node name> chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_devices